Benchmarks and guts sometimes contradict each other. A benchmark says “the performance difference is not big”, but the guts tell otherwise (something like “OH YEAH URGHH”). I was wondering why some of our servers are much faster than others, and it turned out different kernels ship different default I/O schedulers. Setting ‘deadline’ (the Ubuntu Server default) works miracles compared to ‘cfq’ (the Fedora, and probably standard Ubuntu kernel, default) on our traditional workload.
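For the curious, switching schedulers doesn’t even need a reboot; a minimal sketch, assuming the disk is sda (as in the stats below) and a kernel that still offers both schedulers:

```shell
# Show the available schedulers for sda; the active one is in brackets,
# e.g.: noop anticipatory deadline [cfq]
cat /sys/block/sda/queue/scheduler

# Switch to deadline at runtime (as root); takes effect immediately
echo deadline > /sys/block/sda/queue/scheduler

# To make it stick across reboots, boot with elevator=deadline
# on the kernel command line (e.g. in the GRUB config).
```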
Now all we need is to show some numbers, to please gut-based thinking (though it is always pleased anyway):
deadline:

avg-cpu:  %user   %nice    %sys %iowait   %idle
           4.72    0.00    7.95   18.18   69.15

Device:  rrqm/s  wrqm/s     r/s     w/s   rsec/s   wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00    0.10   91.30   31.30  3147.20  1796.00  1573.60   898.00    40.32     0.98    7.98   3.65  44.80
cfq:

avg-cpu:  %user   %nice    %sys %iowait   %idle
           4.65    0.00    7.62   38.26   49.48

Device:  rrqm/s  wrqm/s     r/s     w/s   rsec/s   wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00    0.10  141.26   38.86  4563.44  2571.03  2281.72  1285.51    39.61     7.61   42.52   5.38  96.98
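Stats like the two samples above come from sysstat’s iostat in extended mode; the 10-second interval here is just an example:

```shell
# -x: extended per-device statistics (await, svctm, %util)
# -k: report throughput in kB/s
# 10: print a fresh report every 10 seconds (the first report
#     covers averages since boot, so skip it when eyeballing load)
iostat -x -k 10
```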
Though load slightly rises and drops, the await/svctm numbers are always better on deadline. The box runs a high-concurrency (multiple background InnoDB readers), high-volume (>3000 SELECT/s), read-only (aka slave) workload against a ~200 GB dataset, on top of a 6-disk RAID0 with a write-behind cache. Whatever the next benchmarks say, my guts will still fanatically believe that deadline rocks.