High Performance MySQL: Optimization, Backups, Replication, and More

Category: Programming
Author: Baron Schwartz, Arjen Lentz
Stack Overflow mentions: 23 all time, 1 this year, 4 this month


by DavidM   2019-07-21

Assuming MySQL here, use EXPLAIN to find out what is going on with the query; make sure the indexes are being used as efficiently as possible, and try to eliminate filesorts. High Performance MySQL: Optimization, Backups, Replication, and More is a great book on this topic, as is the MySQL Performance Blog.
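As a rough sketch (the `orders` table and index name here are hypothetical, not from the question), a typical EXPLAIN workflow looks like this:

```sql
-- Hypothetical table, for illustration only.
EXPLAIN SELECT customer_id, SUM(total) AS revenue
FROM orders
WHERE order_date >= '2019-01-01'
GROUP BY customer_id;

-- If the Extra column reports "Using filesort" or "Using temporary",
-- a composite index covering the WHERE and GROUP BY columns often removes it:
CREATE INDEX idx_orders_date_customer ON orders (order_date, customer_id);
```

Re-run the EXPLAIN after adding the index to confirm the filesort is gone.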

by HLGEM   2019-07-21

Indexing is key to performance with this number of records, but how you write the queries can make a big difference as well. Specific performance tuning methods vary by database, but in general: avoid returning more records or fields than you actually need, make sure all join fields are indexed (as well as common WHERE-clause fields), and avoid cursors (although I think this is less true in Oracle than in SQL Server; I don't know about MySQL).
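For example (the schema here is hypothetical, just to illustrate the advice above), indexing the join and WHERE-clause columns and selecting only the needed fields looks like this:

```sql
-- Hypothetical schema for illustration only.
-- Index the join column and a commonly filtered column:
CREATE INDEX idx_orders_customer ON orders (customer_id);
CREATE INDEX idx_orders_status   ON orders (status);

-- Select only the fields you need, rather than SELECT *:
SELECT c.name, o.total
FROM customers c
JOIN orders o ON o.customer_id = c.id
WHERE o.status = 'open';
```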

Hardware can also be a bottleneck especially if you are running things besides the database server on the same machine.

Performance tuning is a very technical subject and can't really be answered well in a format like this. I suggest you get a performance tuning book and read it. Here is a link to one for MySQL: http://www.amazon.com/High-Performance-MySQL-Optimization-Replication/dp/0596101716

by anonymous   2019-07-21

Books are better than web searches for learning performance tuning for a database. It is a complex subject and varies greatly from database to database, and even, as @OMGPonies said, from version to version.

The only MySQL performance book I found on Amazon; I don't know how good it is: http://www.amazon.com/High-Performance-MySQL-Optimization-Replication/dp/0596101716/ref=sr_1_1?ie=UTF8&s=books&qid=1277756707&sr=8-1

by anonymous   2019-07-21

This is a great book: High Performance MySQL.
Typically you are only as powerful as your weakest database. If the performance of one slows down, it will tend to lock up web requests. Make sure to have good monitoring in place for your DB health and for your webapp health.

200 records per table (depending on the number of fields and indexes) is a very small table size. This suggests that you should go back to the drawing board with your design.

100 databases is a lot to keep up with. If you go this route, automate everything! That being said, unless you have billions of records, you don't need this.

Based on the information you have provided, I would suggest scrapping your design and looking for something simpler. If there are external constraints that require this, then hire an operations person with MySQL DBA skills; what you described is a 10 to 20 hour a week commitment.

by anonymous   2017-08-20

In general this is a very subtle issue and not trivial whatsoever. I encourage you to read mysqlperformanceblog.com and High Performance MySQL. I really think there is no general answer for this.

I'm working on a project which has a MySQL database with almost 1TB of data. The most important scalability factor is RAM. If the indexes of your tables fit into memory and your queries are highly optimized, you can serve a reasonable number of requests with an average machine.

The number of records does matter, depending on what your tables look like. It makes a difference whether you have a lot of varchar fields or only a couple of ints or longs.

The physical size of the database matters as well: think of backups, for instance. Depending on your engine, your physical database files grow but don't shrink, for instance with InnoDB. So deleting a lot of rows doesn't help to shrink your physical files.
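As an illustration (the table name is hypothetical), with `innodb_file_per_table` enabled a table rebuild is the usual way to hand space back to the filesystem after large deletes; in the shared-tablespace layout even this will not shrink the ibdata file:

```sql
-- Rebuilds the table and its indexes. For InnoDB this is mapped to a
-- table rebuild, which can shrink the table's .ibd file when
-- innodb_file_per_table is enabled.
OPTIMIZE TABLE big_table;
```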

There's a lot to these issues, and as in many cases, the devil is in the details.

by anonymous   2017-08-20

Not PHP specific (as others have mentioned, "advanced" MySQL knowledge should be language-independent), but here you go (from this question and this question):

Understanding MySQL Internals:

Learn how data structures and convenience functions operate, how to add new storage engines and configuration options, and much more

High Performance MySQL:

Learn how to design schemas, indexes, queries and advanced MySQL features for maximum performance, and get detailed guidance for tuning your MySQL server, operating system, and hardware to their fullest potential. You'll also learn practical, safe, high-performance ways to scale your applications with replication, load balancing, high availability, and failover.

Pro MySQL:

Topics include transaction processing and indexing theory, benchmarking and profiling, and advanced coverage of storage engines, data types, subqueries, derived tables, and joins. Also covers MySQL 5's new enterprise features like stored procedures, triggers, and views.

(partial descriptions from Amazon included, see respective product page for more detailed info).

by anonymous   2017-08-20

This sounds like a job for a special type of MySQL replication topology.

The topology I have in mind is called "the Distribution Master," which comes from pages 368-370 of the book High Performance MySQL: Optimization, Backups, Replication, and More, under the subheading "Master, Distribution Master, and Slaves."

You will need the following:

1 DB Master Server
1 DB Distribution Master (known as a /dev/null Slave, or Blackhole Slave)
Any number of DB Slaves


On the DB Master:

  1. Install MySQL 5.1.x
  2. Make sure /etc/my.cnf is configured for replication (server-id, log-bin)
  3. Start up MySQL
  4. RESET MASTER; (clear binary logs on the DB Master)
  5. Load the data into the Master
  6. GRANT SELECT, REPLICATION SLAVE ON *.* TO replicator@'%' IDENTIFIED BY 'repl-password';


On the DB Distribution Master:

  1. Install MySQL 5.1.x
  2. Make sure /etc/my.cnf is configured for replication (server-id, log-bin, log-slave-updates)
  3. Start up MySQL
  4. CHANGE MASTER TO MASTER_HOST='IP Address of DB Master', MASTER_PORT=3306, MASTER_USER='replicator', MASTER_PASSWORD='repl-password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=106;
  5. GRANT SELECT, REPLICATION SLAVE ON *.* TO replicator@'%' IDENTIFIED BY 'repl-password';
  6. For every table that is not in the information_schema database and not in the mysql database, convert the table to the BLACKHOLE storage engine like this: ALTER TABLE tblname ENGINE=BLACKHOLE;
  7. RESET MASTER; (clear binary logs on the DB Distribution Master)
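A minimal /etc/my.cnf sketch for the DB Distribution Master, with illustrative values (the server-id and log name are assumptions for your environment; the BLACKHOLE default engine is what makes it a /dev/null slave):

```ini
# Sketch only; adjust server-id and paths for your environment.
[mysqld]
server-id              = 2           # must be unique within the topology
log-bin                = mysql-bin   # write a binlog to feed the slaves
log-slave-updates                    # re-log events received from the DB Master
default-storage-engine = BLACKHOLE   # discard row data, keep only the binlog
```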


On each DB Slave:

  1. Install MySQL 5.1.x
  2. Make sure /etc/my.cnf is configured for replication (server-id)
  3. Start up MySQL
  4. CHANGE MASTER TO MASTER_HOST='IP Address of DB Master', MASTER_PORT=3306, MASTER_USER='replicator', MASTER_PASSWORD='repl-password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=106;
  5. START SLAVE; (let replication catch up, check SHOW SLAVE STATUS\G)
  6. STOP SLAVE; (CHANGE MASTER requires the slave threads to be stopped)
  7. CHANGE MASTER TO MASTER_HOST='IP Address of DB Distribution Master', MASTER_PORT=3306, MASTER_USER='replicator', MASTER_PASSWORD='repl-password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=106;

The purpose of this setup is to have the DB Master only process SQL meant for the DB Master. The SQL to be replicated to the DB Slaves is handled by the DB Distribution Master. This relieves the DB Master of transmitting SQL to the Slaves; that becomes the responsibility of the DB Distribution Master. All Slaves read SQL changes from the DB Distribution Master rather than the DB Master.

Although I do not fully understand your application, this topology should properly support one central database and multiple read slaves without I/O-bounding the central database. Your application should organize INSERTs, UPDATEs, and DELETEs to the client databases, as long as each laptop handles a unique set of clients distinct from the other laptops.


You may have to do serious testing on the DB Slaves to make sure data doesn't vanish because of the BLACKHOLE setup. If this occurs, try removing 'default-storage-engine=BLACKHOLE' and reloading everything.

Future questions of this nature should be asked on dba.stackexchange.com.

by anonymous   2017-08-20

For your first question:

X row(s) fetched in Y s (Z s)

X = the number of rows (of course); Y = the time it took the MySQL server to execute the query (parse, retrieve, send); Z = the time the result set spent in transit from the server to the client.

(Source: http://forums.mysql.com/read.php?108,51989,210628#msg-210628)

For the second question, you will never ever know how the query performs unless you test with a realistic number of records. Here is a good example of how to benchmark correctly: http://www.mysqlperformanceblog.com/2010/04/21/mysql-5-5-4-in-tpcc-like-workload/

That blog in general, as well as the book "High Performance MySQL", is a goldmine.

by anonymous   2017-08-20

I would recommend two books I recently read:

  1. High Performance MySQL
  2. Refactoring SQL Applications

by anonymous   2017-08-20

The best approach to break down your MySQL query is to run the EXPLAIN command, and to look at the MySQL documentation on query optimization with EXPLAIN.

MySQL provides some great free GUI tools as well; the MySQL Query Browser is the one you want.

When you run the EXPLAIN command, it breaks down how MySQL interprets your query and displays the complexity. It might take some time to decode the output, but that's another question in itself.
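As a rough guide (the query and table here are hypothetical), these are the EXPLAIN output columns worth decoding first:

```sql
-- Hypothetical query for illustration.
EXPLAIN SELECT * FROM users WHERE email = 'a@example.com';

-- Columns to read in the output:
--   type:          access type; ALL means a full table scan, ref/const are cheap
--   possible_keys: indexes MySQL considered for this query
--   key:           the index actually chosen (NULL means none was used)
--   rows:          estimated number of rows examined; lower is better
--   Extra:         watch for "Using filesort" and "Using temporary"
```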

As for a good book I would recommend: High Performance MySQL: Optimization, Backups, Replication, and More