Let’s assume you’ve got a small MySQL database with a budget to match, and your only real concern is getting data in and out of it as quickly as possible. Obviously the first question is “How is that different from normal MySQL?” Simple: in this case I don’t care too much whether the data survives. Here’s a more concrete example. Say your MySQL database is the back end for a system monitoring several sensors that sample data every second or so. It’s not a large project, so there’s no budget for hardware; the sensors are off-the-shelf components that eventually report back to your API, and you need to make do with the hardware you have or you’ll blow the budget. In this situation you want to sample data as quickly as possible, and you need to be sure the server won’t suddenly bog down (so it has to perform), but you don’t really care if the server loses power and you lose a second or so of data every two or three months. The sensors are running constantly, so a single second effectively means nothing.
There are plenty of other situations like this. WordPress is another example, as it runs with MySQL as its back end. What’s the worst-case scenario if the system loses power? That you might lose a row from one of the logs? Compared to potentially more than doubling performance (and loading speed on a website directly relates to conversion rate; time is literally money), it’s a worthwhile tradeoff.
The tradeoff ultimately comes down to a question: what’s the worst thing that can happen if we lose power or the system fails at the worst possible time, weighed against the day-to-day benefits of additional speed and cost savings? Obviously I’m not suggesting doing anything extreme on systems that actually handle money or similar cases. I’d suggest keeping your blogging database and your payment system completely separate; one needs 5-minute backups and automatic failover and the other doesn’t.
So now that we’ve worked out our goal and risk tolerance, let’s explore the options…
If you’re having performance problems, I’m assuming you’ve already covered the basics of MySQL administration: you’ve used the slow query log and the general query log to confirm what queries are running, and that the queries running most often are generally hitting indexes. I’m also assuming you’re not running huge joins for data that never ends up getting used anywhere, or querying through multiple views where it can be avoided (as this adds overhead).
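If the slow query log isn’t on yet, a minimal my.cnf fragment looks something like this (the file path and one-second threshold are illustrative, not required values):

```ini
[mysqld]
# Log any query that takes longer than one second
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1
# Optionally also log queries that don't use an index (can be noisy)
log_queries_not_using_indexes = 1
```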
I only care about reads
If your database is almost entirely reads, like an import that runs in the morning with the latest set of country-wide data from your remote services and is then reported on through the day, your cheapest option may be to just buy more RAM. For InnoDB, the option…
innodb_buffer_pool_size = 1G
will control the maximum amount of memory assigned to MySQL’s cache. InnoDB doesn’t rely on the operating system’s normal page cache; it’s often configured to issue reads that bypass OS caching (for example with innodb_flush_method = O_DIRECT) in favour of its own system for storing data. The buffer pool is what holds disk blocks in memory, split between the most recently read blocks and the blocks referenced most often: by default 5/8ths of the pool for new data and 3/8ths for old data. One thing to note is that if you’re using row compression, the compressed and uncompressed copies of blocks are stored separately, so some of what you gain in I/O from compression is lost to storing the data twice in memory.
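You can check how well the pool is doing with the standard InnoDB status counters; comparing the two below gives a rough hit rate (a ratio near 100% means the working set fits in memory):

```sql
-- Read requests served from the buffer pool (memory)
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
-- Reads that could not be satisfied from the pool and went to disk
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
```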
However, the very quickest way to boost performance is to throw more memory at the problem. On the Classic Azure machines it was cheaper to buy more RAM than to get faster SSDs, so even an unwieldy database could simply be loaded into memory directly.
I’d aim for roughly 70% of the system’s total memory on a dedicated database server. This is unlikely to scale linearly with performance; you’re more likely to see a stepladder effect, with jumps as progressively more tables and indexes fit completely in memory. The remaining 30% is for the OS, the other buffers, and various background tasks that may fire up and chew memory. One thing you absolutely want to avoid is setting it high enough that the OS starts to page, as this will randomly hammer your database performance.
One thing to note is that while the buffer pool’s size is reserved at startup, physical memory is generally only consumed as data is loaded in. So if you assign 10GB on a system that only has 1GB of tables, even if every piece of data is read into memory, only around 1GB of your RAM will actually be consumed. The value also accepts suffixes such as M and G.
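As a back-of-the-envelope sketch of the sizing rule above (the 70% figure is the rule of thumb from this article, not a value MySQL mandates, and the function name is my own):

```python
def buffer_pool_size_mb(total_ram_mb: int, fraction: float = 0.7) -> int:
    """Suggest an innodb_buffer_pool_size in MB for a dedicated DB host,
    leaving the remainder for the OS, other buffers, and background tasks."""
    if total_ram_mb <= 0:
        raise ValueError("total_ram_mb must be positive")
    return int(total_ram_mb * fraction)

# A dedicated 16 GB server: suggest about 11468 MB for the pool
print(buffer_pool_size_mb(16 * 1024))  # -> 11468
```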
The problem with in-memory tables
MySQL has a storage engine called “MEMORY” that keeps the tables created with it entirely in memory. The main downside, aside from not being backed up to disk, is that the MEMORY engine doesn’t store VARCHAR efficiently: it allocates the entire declared size up front for every row. Normally you’d be fine declaring one or two fields longer than they usually need to be, to cover the edge cases where large strings are stored, but with a MEMORY table that approach will run you out of memory very quickly.
Additionally, the MEMORY engine still uses table-level locking instead of row-level, which means only one write can occur on a MEMORY table at a time, although given the latencies involved this is unlikely to make a significant difference either way.
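To illustrate the VARCHAR caveat, here’s a sketch (table and column names are invented for the example): every row in this table reserves the full 255 bytes for `note`, used or not, because MEMORY stores rows at fixed width.

```sql
CREATE TABLE sensor_cache (
    sensor_id INT NOT NULL,
    reading   DECIMAL(10,4) NOT NULL,
    note      VARCHAR(255)      -- stored as fixed-width in the MEMORY engine
) ENGINE = MEMORY;

-- The per-table size cap for MEMORY tables is governed by this variable
SET GLOBAL max_heap_table_size = 64 * 1024 * 1024;  -- 64 MB, illustrative
```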
One of the biggest changes you can make is to disable MySQL’s atomic functionality. Atomicity is what ensures that a piece of data is either written completely or not at all. MySQL accomplishes this by flushing between writes, and by writing to a temporary log before copying the data into its final location. The normal process is…
- Log that a write has started
- Write to the temporary location
- Log that a write has finished
- Write over the old permanent data from the temporary location
- Log that the write or update transaction is complete
The intention is that if power is lost or the system crashes, the next boot of MySQL will check the log to see the last recorded status of each transaction, along with the data in the temporary log, and restore the system to the last known good state: either by completing a write that had not finished copying to its final location, or by discarding a log entry that had not finished writing to the temporary location. Every write in this sequence includes a flush command, which pushes the data from RAM to the drive and makes the drive confirm the data has been written from its buffers to the drive surface before continuing.
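The five steps can be sketched in Python (the file names, log format, and function are invented for illustration; real InnoDB works on pages and its own log format, not whole files). The point is that each step is made durable with an fsync before the next begins, and that wait is where the time goes:

```python
import os

def durable_write(path: str, data: bytes, log_path: str) -> None:
    """Toy version of the write-ahead pattern: log, write, log, move, log,
    syncing to disk at every step."""
    def append_log(msg: str) -> None:
        with open(log_path, "ab") as log:
            log.write(msg.encode() + b"\n")
            log.flush()
            os.fsync(log.fileno())       # wait for the drive to confirm

    tmp = path + ".tmp"
    append_log("BEGIN " + path)          # 1. log that a write has started
    with open(tmp, "wb") as f:           # 2. write to the temporary location
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    append_log("PREPARED " + path)       # 3. log that the write has finished
    os.replace(tmp, path)                # 4. replace the old permanent data
    append_log("COMMIT " + path)         # 5. log that the transaction is complete
```

On recovery, a transaction with a BEGIN but no PREPARED is discarded, and one with PREPARED but no COMMIT is rolled forward, which is exactly the choice described above.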
What we want for our performance is a one-step process…
- Write the data to RAM to be written at some point
Which involves fewer steps and returns the success code much faster. MySQL will still see the new data, as from its perspective the “good” state of the system is the in-memory data overlaid on the hard drive, but the data may not yet be on the drive surface. If the system loses power, some rows will come back in their old state even though the writes were already confirmed.
For a bank system, this would be a critical problem, but for a blog, who really cares?
The three options you want are…
innodb_flush_log_at_trx_commit = 0
innodb_doublewrite = 0
innodb_flush_log_at_timeout = 3
Flush log at trx commit controls whether the log is flushed to disk on every commit; setting it to 0 means the log is written and flushed in the background instead. Doublewrite controls whether pages are first staged in the doublewrite buffer, an extra on-disk copy that protects against partially written pages during a crash. Flush log at timeout sets how many seconds may pass between log flushes when per-commit flushing is disabled.
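Putting the three settings together in my.cnf (values as above; with the 3-second timeout you risk at most the last few seconds of confirmed-but-unflushed writes on a crash):

```ini
[mysqld]
# Write and flush the log in the background instead of on every commit
innodb_flush_log_at_trx_commit = 0
# Skip the doublewrite buffer (accepts the risk of torn pages on crash)
innodb_doublewrite = 0
# Allow up to 3 seconds between log flushes
innodb_flush_log_at_timeout = 3
```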