High Performance MySQL: Optimization, Backups, Replication, and More
Assuming MySQL here, use EXPLAIN to find out what is going on with the query, make sure that the indexes are being used as efficiently as possible and try to eliminate file sorts. High Performance MySQL: Optimization, Backups, Replication, and More is a great book on this topic as is MySQL Performance Blog.
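For example (a minimal sketch; the log_entries table and its columns are hypothetical):

EXPLAIN SELECT * FROM log_entries WHERE user_id = 42 ORDER BY created_at;
-- If the Extra column shows "Using filesort", an index covering the filter
-- and sort columns lets MySQL read rows in order instead of sorting them:
ALTER TABLE log_entries ADD INDEX idx_user_created (user_id, created_at);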
Indexing is key to performance with this number of records, but how you write the queries can make a big difference as well. Specific performance-tuning methods vary by database, but in general: avoid returning more records or fields than you actually need, make sure all join fields are indexed (as well as common WHERE-clause fields), and avoid cursors (though I think this is less of an issue in Oracle than in SQL Server; I don't know about MySQL).
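For instance (a sketch with hypothetical customers and orders tables):

-- Index the join field on the many side so the join can use ref access:
ALTER TABLE orders ADD INDEX idx_orders_customer (customer_id);

-- Return only the fields you need instead of SELECT *:
SELECT c.name, o.total
FROM customers c
JOIN orders o ON o.customer_id = c.id
WHERE c.country = 'DE';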
Hardware can also be a bottleneck, especially if you are running things besides the database server on the same machine.
Performance tuning is a very technical subject and can't really be answered well in a format like this. I suggest you get a performance-tuning book and read it. Here is a link to one for MySQL: http://www.amazon.com/High-Performance-MySQL-Optimization-Replication/dp/0596101716
Books are better than web searches for learning performance tuning for a database. It is a complex subject and varies greatly from database to database, and even, as @OMGPonies said, from version to version.
The only MySQL performance book I found on Amazon; I don't know how good it is: http://www.amazon.com/High-Performance-MySQL-Optimization-Replication/dp/0596101716/ref=sr_1_1?ie=UTF8&s=books&qid=1277756707&sr=8-1
This is a great book:
http://www.amazon.com/High-Performance-MySQL-Optimization-Replication/dp/0596101716/ref=sr_1_1?ie=UTF8&qid=1294150323&sr=8-1
Typically you are only as powerful as your weakest database. If the performance of one slows down, it will typically lock up web requests. Make sure to have great monitoring in place for your DB health and for your webapp health.
200 records per table (depending on the number of fields and indexes) is a very small table size. This suggests that you should go back to the drawing board with your design.
100 databases is a lot to keep up with. If you go this route, automate everything! That being said, unless you have billions of records, you don't need this.
Based on the information you have provided, I would suggest scrapping your design and looking for something simpler. If there are external constraints that require this, then hire an operations person with MySQL DBA skills; what you described is a 10-20 hour a week commitment.
In general this is a very subtle issue, and by no means trivial. I encourage you to read mysqlperformanceblog.com and High Performance MySQL. I really think there is no general answer for this.
I'm working on a project which has a MySQL database with almost 1TB of data. The most important scalability factor is RAM. If the indexes of your tables fit into memory and your queries are highly optimized, you can serve a reasonable amount of requests with an average machine.
The number of records does matter, depending on what your tables look like: there is a big difference between having a lot of VARCHAR fields and only a couple of INTs or BIGINTs.
The physical size of the database matters as well: think of backups, for instance. Depending on your engine, the physical DB files grow but don't shrink; with InnoDB, for instance, deleting a lot of rows doesn't shrink your physical files.
There's a lot to these issues, and as in many cases, the devil is in the details.
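To get a feel for data and index sizes per table, one way (assuming access to the standard information_schema; 'your_database' is a placeholder):

-- Approximate data and index size per table, in MB:
SELECT table_name,
       ROUND(data_length  / 1024 / 1024, 1) AS data_mb,
       ROUND(index_length / 1024 / 1024, 1) AS index_mb
FROM information_schema.tables
WHERE table_schema = 'your_database'
ORDER BY data_length + index_length DESC;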
Not PHP specific (as others have mentioned, "advanced" MySQL knowledge should be language-independent), but here you go (from this question and this question):
Understanding MySQL Internals
High Performance MySQL
Pro MySQL
(See the respective Amazon product pages for descriptions and more detailed info.)
This sounds like a job for a special type of MySQL replication topology.
The topology I have in mind is called "the Distribution Master," which comes from pages 368-370 of the book High Performance MySQL: Optimization, Backups, Replication, and More, under the subheading "Master, Distribution Master, and Slaves."
You will need the following:
1 DB Master Server
1 DB Distribution Master (known as a /dev/null Slave, or Blackhole Slave)
Any number of DB Slaves
SETUP OF DB MASTER
server-id=1
log-bin=mysql-bin
expire-logs-days=14
default-storage-engine=InnoDB
SETUP OF DB DISTRIBUTION MASTER
server-id=2
log-bin=mysql-bin
log-slave-updates # so changes replicated from the Master are written to this binlog for the Slaves
expire-logs-days=14
default-storage-engine=BLACKHOLE
skip-innodb
SETUP OF DB SLAVES
server-id=3
default-storage-engine=InnoDB
The purpose of this setup is to have the DB Master process only SQL meant for the DB Master. The SQL to be replicated to the DB Slaves is handled by the DB Distribution Master. This relieves the DB Master of transmitting SQL to the Slaves; that becomes the responsibility of the DB Distribution Master. All Slaves read SQL changes from the DB Distribution Master rather than the DB Master.
Although I do not fully understand your application, this topology should properly support one central database and multiple read slaves without the central database becoming I/O-bound. Your application should organize INSERTs, UPDATEs, and DELETEs per client database, as long as each laptop handles a unique set of clients distinct from the other laptops.
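As a rough sketch of wiring the tiers together (hostnames, credentials, and binlog coordinates below are placeholders, not taken from the original answer; this is the classic CHANGE MASTER TO syntax):

-- On the DB Distribution Master: replicate from the real Master.
CHANGE MASTER TO
  MASTER_HOST='db-master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;

-- On each DB Slave: replicate from the Distribution Master instead.
CHANGE MASTER TO
  MASTER_HOST='db-distribution.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;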
CAVEAT
You may have to do serious testing on the DB Slaves to make sure data doesn't vanish because of the BLACKHOLE setup. If this occurs, try removing 'default-storage-engine=BLACKHOLE' and reloading everything.
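One sanity check (a sketch; the table names are placeholders) is to compare table checksums between the DB Master and a DB Slave once replication has caught up:

-- Run on both the DB Master and a DB Slave, then compare the results:
CHECKSUM TABLE clients, orders;
-- Differing checksums suggest rows were lost along the way; also check
-- SHOW SLAVE STATUS\G on each tier for replication errors.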
Future questions of this nature should be asked in dba.stackexchange.com
For your first question:
X row(s) fetched in Y s (Z s)
X = number of rows (of course)
Y = time it took the MySQL server to execute the query (parse, retrieve, send)
Z = time the result set spent in transit from the server to the client
(Source: http://forums.mysql.com/read.php?108,51989,210628#msg-210628)
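For illustration (these numbers are made up):

SELECT * FROM customers LIMIT 500;
-- 500 row(s) fetched in 0.0320s (0.0045s)
-- i.e. the server took 0.0320s to execute the query, and the result set
-- spent a further 0.0045s in transit to the client.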
For the second question, you will never ever know how the query performs unless you test with a realistic number of records. Here is a good example of how to benchmark correctly: http://www.mysqlperformanceblog.com/2010/04/21/mysql-5-5-4-in-tpcc-like-workload/
That blog in general, as well as the book "High Performance MySQL", is a goldmine.
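If you need to bulk up a table to a realistic size for such a test, one crude way (a sketch; orders_test and its columns are hypothetical) is to seed a few rows and repeatedly double them:

-- Each INSERT ... SELECT doubles the row count; ~20 repetitions of the
-- second statement turn 2 seed rows into roughly two million.
INSERT INTO orders_test (customer_id, total) VALUES (1, 9.99), (2, 19.99);
INSERT INTO orders_test (customer_id, total)
  SELECT customer_id, RAND() * 100 FROM orders_test;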
I would recommend two books I recently read:
Well, the best approach to breaking down your MySQL query is to run the EXPLAIN command; also look at the MySQL documentation on optimizing queries with EXPLAIN.
MySQL provides some great free GUI tools as well; the MySQL Query Browser is what you need to use.
Running the EXPLAIN command will break down how MySQL interprets your query and display its complexity. It might take some time to decode the output, but that's another question in itself.
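As a rough illustration (the output below is invented; exact columns and values vary by version and schema), the fields to watch are type, key, rows, and Extra:

EXPLAIN SELECT * FROM users WHERE email = 'a@example.com';
-- id | select_type | table | type | key       | rows | Extra
--  1 | SIMPLE      | users | ref  | idx_email |    1 | Using index condition
-- type=ALL would mean a full table scan; rows is MySQL's estimate of how
-- many rows it must examine.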
As for a good book I would recommend: High Performance MySQL: Optimization, Backups, Replication, and More