Two identical queries can have quite different execution times, even though the data layout, table format, everything… is the same:
mysql> select count(*) from transtest;
+----------+
| count(*) |
+----------+
|    10000 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from transtest;
+----------+
| count(*) |
+----------+
|    10000 |
+----------+
1 row in set (2.88 sec)   <---- oh noes!!!
The reason for the difference is simple, though often forgotten: multiversioned (MVCC) tables may contain multiple versions (hence the name) of rows.
Moreover, the engine will keep all row versions that might still be needed by the oldest active transaction, just in case it requests a snapshot from the past.
Think of it as a sort of time machine, just somewhat less efficient :)
While some old transaction is still running, heavily edited tables will accumulate more and more stored row versions.
In the case above, the second query had to read through a million rows to produce the count, whereas the first one was working on a clean environment.
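For illustration, such a pileup can be reproduced with two sessions. This is a hypothetical sketch, not the original test: transtest_copy is an assumed pristine copy of the table, and the number of churn cycles needed will vary.

-- session 1: open a transaction and leave it idle,
-- pinning the oldest read view to "now"
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SELECT COUNT(*) FROM transtest;

-- session 2: churn the table; while session 1 stays open, purge cannot
-- remove the delete-marked row versions, so each cycle leaves another
-- 10000 of them behind (repeat this pair many times)
DELETE FROM transtest;
INSERT INTO transtest SELECT * FROM transtest_copy;

-- session 2: the scan now has to step over all the unpurged versions
SELECT COUNT(*) FROM transtest;

-- session 1: release the old read view so purge can clean up
ROLLBACK;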
Once the transaction holding the old versions was rolled back, the query took 0.00 sec again.
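A quick way to check whether this is happening (a sketch; the INFORMATION_SCHEMA.INNODB_TRX table is only available with the InnoDB plugin / MySQL 5.5 and later, older servers have to make do with the processlist and the InnoDB status output):

-- "History list length" in the TRANSACTIONS section shows how much undo
-- has not been purged yet; a large, growing number means old row
-- versions are piling up
SHOW ENGINE INNODB STATUS\G

-- transactions that have been open for more than a minute
SELECT trx_id, trx_started, trx_mysql_thread_id, trx_query
  FROM INFORMATION_SCHEMA.INNODB_TRX
 WHERE trx_started < NOW() - INTERVAL 1 MINUTE;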
Databases (or DataACIDs) take additional measures, often not exposed to the user, to provide consistent data, so we may treat this as a feature, not a bug.
Conclusion: avoid long running transactions, especially ones doing nothing. Oh well, avoid long running transactions holding locks too, but that is a different question already. If you spot anything sleeping – kill the slacker. At the application level. :-)
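For the record, a minimal sketch of spotting the slackers from the database side (assuming MySQL 5.1+, where INFORMATION_SCHEMA.PROCESSLIST is available; the 600-second threshold is arbitrary), even if the proper fix belongs in the application:

-- connections that have been idle for more than ten minutes
SELECT id, user, host, time
  FROM INFORMATION_SCHEMA.PROCESSLIST
 WHERE command = 'Sleep' AND time > 600;

-- and if one of them really has to go (id taken from the query above):
KILL 12345;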