Wasting InnoDB memory

I usually get strange looks when I complain about memory handling inside InnoDB. It seems as if terabytes of RAM are so common and cheap that nobody should really care about memory efficiency. Unfortunately for me, I do.

Examples:

  • The infamous Bug#15815 – buffer pool mutex contention. The patch for the bug added lots of small mutexes, and by ‘lots’ I mean really, really lots – two mutexes (and an rwlock structure) for each buffer pool page. That makes two million mutexes for a 16GB buffer pool, um, four million for a 32GB one, and I guess even more for larger pools. Result – a 16GB buffer pool pays a 625MB locking tax to solve an 8-core locking problem. Solution? Between the giant lock and armies of page mutexes there lives a land of mutex pools, where locks are happily shared by multiple entities – a minimal sketch of the idea follows this list. I even made a patch; unfortunately it hits some ibuf assertion after server restart, though at first everything works great :)
  • The InnoDB data dictionary always grows, never shrinks. It is not considered a bug, as it isn’t a memory leak – all memory is accounted for by the (hidden) dict_sys->size, and valgrind doesn’t print errors. A 1-column table takes 2k of memory in the InnoDB data dictionary; a table with a few more columns and indexes already takes 10k. 100,000 tables, and 1GB of memory is wasted. Who needs 100,000 tables? People running application farms do. Actually, there even is code for cleaning up the data dictionary – it just wasn’t finished, and is commented out at the moment. Even worse, the fix for Bug#20877 was a joke – it reduced the in-memory structure size, while still not caring about structure count. And of course, do note that every InnoDB partition of a table takes space there too…

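Here is a minimal sketch of the mutex pool idea – my own illustration, not InnoDB code or my actual patch: a small fixed array of mutexes, with each page hashed onto a slot, so unrelated pages occasionally share a lock but the memory tax stays constant no matter how large the buffer pool grows.

    #include <cstddef>
    #include <functional>
    #include <mutex>

    // Illustrative mutex pool: a fixed number of mutexes shared by any
    // number of pages. Not InnoDB code – just the shape of the idea.
    class MutexPool {
    public:
        explicit MutexPool(std::size_t n = 1024)
            : n_(n), mutexes_(new std::mutex[n]) {}
        ~MutexPool() { delete[] mutexes_; }
        MutexPool(const MutexPool&) = delete;
        MutexPool& operator=(const MutexPool&) = delete;

        // Hash the page address onto one of the pooled mutexes.
        std::mutex& for_page(const void* page) {
            return mutexes_[std::hash<const void*>()(page) % n_];
        }

    private:
        std::size_t n_;
        std::mutex* mutexes_;
    };

    // Usage: 1024 mutexes serve a buffer pool of any size.
    //   MutexPool pool;
    //   std::lock_guard<std::mutex> guard(pool.for_page(page_ptr));
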
So generally, if you’re running a bigger InnoDB deployment, you may be hitting various hidden memory taxes – hundreds of megabytes, or even gigabytes – that don’t provide much value anyway. Well, memory is cheap, our next database boxes will be 32GB-class instead of those ‘amnesia’ 16GB types, and I can probably stop ranting :)

Performance of status commands

Though some may think status command performance is overrated, our code relies heavily on SHOW STATUS LIKE "Thread%", and I like to spam SHOW INNODB STATUS. Add monitoring systems that rely on ‘SHOW GLOBAL STATUS’ – and performance starts to matter.

And things are a bit abandoned here. See, in performance engineering people always look for low-hanging fruit to pick, and there are quite a few pieces:

  • Peter wrote a year ago about a performance regression in ‘SHOW STATUS’. Of course, now that Mark has filed a bug, the patch is already in there. This bug would have bitten us very, very severely if we ran the live site against unpatched 5.0 – it slows the command down from 0.2ms to 20ms. There’s a small ha-ha in this issue – guess where these two lines come from. ;-)
  • SHOW INNODB STATUS was taking 0.3s just because two rarely-looked-at variables required traversing quite a few structures. I filed bugs #36941 and #36942 to remind developers of this problem. Disabling these two counters allowed running a few thousand ‘SHOW INNODB STATUS’ commands a second, instead of just three.
  • SHOW GLOBAL STATUS traverses statistics for each thread. That doesn’t matter much on a single-threaded development box, but it gets very, very unhappy at higher thread concurrency – walking every variable on every thread is quite a task even for modern hardware. Some SMP scaling was gained by reducing locking on global variables this way – at the cost of making the command unusable. A sketch of the pattern is below.
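
For the curious, here is a sketch of the pattern at play – illustrative only, not the actual server code. Each thread bumps its own private, cache-line-padded slot, so the hot path never touches a shared lock; the reader pays instead, summing every slot of every thread, for every status variable it reports.

    #include <atomic>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Illustrative only – not the actual server code. One such counter
    // exists per status variable; multiply the read cost accordingly.
    class PerThreadCounter {
    public:
        explicit PerThreadCounter(std::size_t max_threads)
            : slots_(max_threads) {}

        // Hot path: thread `tid` bumps its own slot – no shared lock.
        void add(std::size_t tid, std::uint64_t n) {
            slots_[tid].value.fetch_add(n, std::memory_order_relaxed);
        }

        // SHOW GLOBAL STATUS path: walk every thread's slot for a total.
        std::uint64_t total() const {
            std::uint64_t sum = 0;
            for (const Slot& s : slots_)
                sum += s.value.load(std::memory_order_relaxed);
            return sum;
        }

    private:
        struct alignas(64) Slot {                 // one cache line per slot,
            std::atomic<std::uint64_t> value{0};  // to avoid false sharing
        };
        std::vector<Slot> slots_;
    };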

INSERT speed, paramy, auto-inc

One of the first things I noticed when I tried to run some 5.0 tests was that INSERT speed was much slower (no benchmarking at the time – purely the subjective feeling that loading the dump was somewhat too slow).

After discussing it in the evening, and some coding in the morning, I built paramy – a simple C++ utility that reads an .sql stream and spreads execution of the statements over multiple worker threads. A minimal sketch of the idea is below.
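
This is not the real paramy code – that one does proper statement splitting and error reporting. The sketch naively assumes one statement per input line, and the connection parameters are placeholders; the SET statements are the usual session toggles a dump restore uses.

    #include <mysql.h>

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    static std::queue<std::string> jobs;
    static std::mutex mtx;
    static std::condition_variable cv;
    static bool eof_reached = false;

    static void worker() {
        MYSQL* conn = mysql_init(nullptr);
        if (!mysql_real_connect(conn, "localhost", "user", "password",
                                "test", 0, nullptr, 0)) {
            mysql_close(conn);
            return;
        }
        // Same session toggles a dump restore uses; the checks come
        // back to normal once the session ends.
        mysql_query(conn, "SET unique_checks = 0");
        mysql_query(conn, "SET foreign_key_checks = 0");
        for (;;) {
            std::unique_lock<std::mutex> lock(mtx);
            cv.wait(lock, [] { return eof_reached || !jobs.empty(); });
            if (jobs.empty()) break;  // input finished and queue drained
            std::string stmt = std::move(jobs.front());
            jobs.pop();
            lock.unlock();
            mysql_real_query(conn, stmt.data(), stmt.size());
        }
        mysql_close(conn);
        mysql_thread_end();
    }

    int main() {
        mysql_library_init(0, nullptr, nullptr);  // before any threads
        std::vector<std::thread> pool;
        for (int i = 0; i < 8; ++i)               // worker count is arbitrary
            pool.emplace_back(worker);

        // Reading stays single-threaded; only execution is spread out.
        std::string line;
        while (std::getline(std::cin, line)) {
            std::lock_guard<std::mutex> lock(mtx);
            jobs.push(line);
            cv.notify_one();
        }
        {
            std::lock_guard<std::mutex> lock(mtx);
            eof_reached = true;
        }
        cv.notify_all();
        for (std::thread& t : pool) t.join();
        mysql_library_end();
    }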

For benchmarking I took a simple 13m-row table with 1.6G of data and 1.7G of indexes. Loading it with the standard mysql client took 30min (7200 rows/s). Paramy unfortunately hit the auto-inc lock, and didn’t provide much of a performance gain – the data load took 25min (8700 rows/s). The win here came mostly from everything above InnoDB (parsing, optimization) running in parallel: instead of 100% of a core, mysqld went up to ~130%, so it obviously managed to get additional cycles from SMP.

Dropping AUTO_INCREMENT from the column definition changed everything. The standard mysql load wasn’t much faster, but under paramy mysqld managed to use ~320% of a CPU core, and did the load in 10min (~21500 rows/s).

The interesting part – none of these INSERTs asks for AUTO-INC values, so in theory one could simply disable the locking during the load and re-enable it afterwards, the same way unique or foreign key constraint checks are re-enabled after a data import – that way loading data into InnoDB would be much, much faster, especially with many more cores – though then the contention moves to the transaction log mutex.

An additional way to reduce contention inside InnoDB may be removing checksums, for both reads and writes – they have quite a big share of the CPU load. Disabling them at least during bulk imports can be quite useful – see below.
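
If I remember the knobs correctly, the server already ships a boolean innodb_checksums option, so for a bulk-import run one can start the daemon with checksums off, and restart with the defaults once the import is done:

mysqld --skip-innodb-checksums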

Oh, I mentioned the subjective feeling that 5.0 was slower. I finally benchmarked it – 4.0 did the data load in five minutes, at over 40000 rows/s. I hope this is the last test where 4.0 is twice as fast as 5.0.

Shameless ad

“The Sun Fire X4240, powered by the AMD Opteron 2200 and 2300 processor series, is a two-socket, 8-core, 2RU system with up to twice the memory and storage capacity of any system in its class. It’s the first and only two-socket AMD Opteron system with sixteen hard drive slots in a 2RU form factor.”

Well, now that I work for Sun, this ends up being a shameless ad and boasting :) But back when I saw information about this product, my first thought was “wow, that’s the best machine for scaling up scaled-out environments!”.

In the web database world people agree that the number of spindles (disks!) matters – remember YouTube’s “think disks, not servers” mantra from the scaling panel at the MySQL Conference. Before, getting such a number of spindles would’ve required external arrays, taking up space and sucking power (TCO! ;-)

And for us… it probably means we can finally start doing RAID10, instead of RAID0. :-)

By the way, that box even has a Quad-Core service processor. Way to go! :)

Trainwreck: external MySQL replication agent

I wanted to work more on the actual project before writing about it, but I’m lazy – and the dear community may not be.

At Wikimedia we have one database server that replicates from multiple (like 15!) masters. It even splits the replication streams by database and applies changes in parallel.

All this is done by an external replication agent, Trainwreck. It is public-domain software written by River. It doesn’t have much documentation and works only on Solaris (River likes Solaris), unless you comment out all the process-management blocks, which use doors and other Solaris-specific APIs.

It lives in Wikimedia SVN, and can be checked out using:

svn co http://svn.wikimedia.org/svnroot/mediawiki/trunk/tools/trainwreck/

It sits there, maintained just for the needs of that specific single server (ok, there might be two or three), so if anyone wants to make it available to a broader audience, feel free to fork the project to some community-oriented place and add all the nice features you need. :)