These are some of my notes from sysbench in-memory read-only testing over the past day or so:
- In the ‘fetch data by primary key’ benchmark with separate read snapshots at each statement, MySQL shines until ~200 concurrent threads, then performance starts dropping slightly faster than one would want – I think mostly from table cache LOCK_open contention
- auto-commit cost (establishing a read snapshot per statement) for SELECTs is ~10% for MySQL, but for PG it can be +50% in plain SQL mode and +130% (!!!!!!!) when using prepared statements (this can be seen in the graph – obviously the global lock PG holds during this operation is held for too long and maybe is too costly to acquire; a SQL sketch of the two patterns is at the end of these notes.)
- Some benchmarks went up by 10% when using jemalloc
- MySQL could accept 10x more connections per second than PG (15000 vs 1500)
- The most confusing behavior MySQL exhibited was with 100-record range scans in PK order (query shapes sketched below):
- At innodb_thread_concurrency=0 it did around 70k range reads/s, for both fetching data and aggregation (SUM())
- At innodb_thread_concurrency>0 it did only 10k range reads/s returning data, but was still able to do 70k aggregations/s
- PG was doing ~35k ops/s in that test
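For reference, these are roughly the query shapes involved – a sketch assuming the stock sysbench oltp table sbtest (id is the primary key; the literal bounds here are just illustrative):

```sql
-- Point select by primary key (the ‘fetch data by primary key’ test).
SELECT c FROM sbtest WHERE id = 42;

-- 100-record range scan in PK order, shipping all rows to the client.
SELECT c FROM sbtest WHERE id BETWEEN 1000 AND 1099;

-- Same range, aggregated server-side – only a single row is returned.
SELECT SUM(k) FROM sbtest WHERE id BETWEEN 1000 AND 1099;
```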
It seems that, at least for systems that do lots of range scans (or joins, I guess), managed concurrency kills performance entirely due to giving up tickets too often; I need to review it more. (Update: it seems the offending stack is ha_release_temporary_latches being called way too early in select_send::send_data().)
Getting a read view in InnoDB also takes kernel_mutex (at least in the versions I’ve looked at), so at some point this is likely to be hit as well… but no doubt LOCK_open is hit first.
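To make the auto-commit numbers above concrete, here is a minimal sketch of the two access patterns, again assuming the sbtest table. In auto-commit mode every SELECT is its own transaction and establishes its own read snapshot; wrapping the statements in an explicit transaction at REPEATABLE READ pays that cost only once:

```sql
-- Auto-commit: each statement is an implicit transaction,
-- so a fresh read snapshot is established per SELECT.
SELECT c FROM sbtest WHERE id = 42;
SELECT c FROM sbtest WHERE id = 43;

-- Explicit transaction: at REPEATABLE READ the snapshot is taken at the
-- first read and reused by later statements, so the snapshot cost is paid
-- once. (At READ COMMITTED, PG still takes a snapshot per statement.)
START TRANSACTION;
SELECT c FROM sbtest WHERE id = 42;
SELECT c FROM sbtest WHERE id = 43;
COMMIT;
```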
Hey Domas, interesting figures! Thanks for sharing. I assume this testing was done on MySQL 5.1. Have you considered comparing this with 5.5 at some point? I’d be curious to learn if there are any improvements. Cheers!
I tested a hack to make transaction start fast at high concurrency and it made a big difference – http://bugs.mysql.com/bug.php?id=49169. This has not been fixed in 5.5.
Does PG have a process cache to avoid the overhead of forking a process per new connection?
Why haven’t you filed feature requests yet for any of this?
Mark, I guess you hit that with multiple tables in your testing – because LOCK_open stands in the way for me. Maybe it is both :-)
PG does not have a process cache – there are external third-party proxies, like ‘pgbouncer’, which maintain persistent connections with PG.
As for feature requests, it was Sunday evening ;-)
Lenz, not yet – may start looking at some of the things.
Steward, yup!
Hmm, per the bug I created at http://bugs.mysql.com/bug.php?id=58037, the response is that this has been fixed in 5.5. I think they are trying to eliminate the Facebook patch. :)
Don’t worry, Mark, we will find problems, for sure ;-)