so how does mysql generate revenue?

In order to give a talk at UC2006 I have to somehow enter the United States. Being from the East means you are treated as an immigrant, unless you prove you are not. To prove that at the consulate, you have to bring income tax declarations, bank statements, lists of financial transactions and job approvals.

Oddly, for me the questions at the consulate were:

  • How does MySQL AB generate revenue?
  • What is special about MySQL’s products?

Answering those questions seemed to satisfy Uncle Sam. Later I wondered: did they really know, or verify, what MySQL is? Asking such questions did seem quite… educated!

So now, with my fingerprints stored in databases, I’ve got the green light to visit the US and talk about our experiences there :)

browser wars in mysql community

Um, I pasted a link in the #mysql channel on Freenode, and within a minute or two I had an amazing analysis of the browsers used by the tech community:

  • around 50% of guests were using Safari on MacOS X 10.4
  • one third were using Windows with 50/50 split of Firefox (1.5) and Opera (7.x)
  • Konqueror on Linux had 17% share

6 visits were analyzed, so it reflects the distribution quite accurately. Obviously IE is not used by database professionals.

mediawiki graphic profile

So I used an amazing tool named KCacheGrind to parse XDebug output of MediaWiki. Of course, I immediately started refactoring some places in the code. It is a random web hit; I don’t even know what it is, but it looks nasty and is waiting for a gentle touch. Kill remote interwiki caches! :)

Here is what I got (triple click to zoom):

Here is what I used:


profiling web applications

I’ve experimented a bit with Wikipedia’s existing profiling framework lately, and extracted some bits from it. The way we do it now is with an internal profiler written in PHP, and wfProfileIn() / wfProfileOut() function calls around the code. As there are quite a lot of profiling entry points, the overhead isn’t tolerable at high loads, so profiling is turned on only for every 50th page view.

One of the interesting entry points is in Database::query(): we have a tiny set of regexps in Database::generalizeSQL that convert literal arguments into ‘X’ for strings and ‘N’ for integers. This way all similar queries end up grouped together in the profiling report, showing us which queries need some love or are bringing down our cluster.
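The real generalizeSQL lives in MediaWiki’s Database class; the idea can be sketched in Python with a couple of regexps (the exact patterns MediaWiki uses differ):

```python
import re

def generalize_sql(sql):
    """Collapse literal arguments so similar queries group together:
    string literals become 'X', integer literals become N."""
    sql = re.sub(r"'[^']*'", "'X'", sql)  # single-quoted strings -> 'X'
    sql = re.sub(r'"[^"]*"', "'X'", sql)  # double-quoted strings too
    sql = re.sub(r"\b\d+\b", "N", sql)    # bare integers -> N
    return sql
```

So `SELECT page_id FROM page WHERE page_title = 'Foo' LIMIT 1` and the same query with different literals both collapse to `... WHERE page_title = 'X' LIMIT N` and land in one profiling bucket.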

The profiler may have different personalities: results can be written to a database table, printed out to the user, or packed into UDP packets and sent over the network. I made a special daemon for handling those messages. It is just an in-memory hashtable, updated on every incoming UDP message, and any incoming TCP connection gets a full XML export, which a reporting script can later turn into human-readable output. It is very easy to scale such a profiling system, as no state preservation is needed and the XML exports can be aggregated. I guess it would have been possible to do this with HEAP tables in MySQL, but writing tiny daemons is sooo fun :-)
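For flavour, here is a toy version of such a collector in Python. The datagram format, port and XML layout are invented for illustration, not what our daemon actually speaks:

```python
import socket
import threading
from collections import defaultdict
from xml.sax.saxutils import quoteattr

stats = defaultdict(lambda: [0, 0.0])  # section name -> [count, total seconds]
lock = threading.Lock()

def record(datagram):
    """Apply one datagram of the (invented) form b'section elapsed'."""
    name, elapsed = datagram.decode().rsplit(" ", 1)
    with lock:
        entry = stats[name]
        entry[0] += 1
        entry[1] += float(elapsed)

def export_xml():
    """Full dump for any TCP client; no state to preserve, easy to aggregate."""
    with lock:
        rows = "".join('<section name=%s count="%d" total="%.6f"/>'
                       % (quoteattr(name), count, total)
                       for name, (count, total) in sorted(stats.items()))
    return "<profile>%s</profile>" % rows

def serve(host="0.0.0.0", port=3811):
    """Bind a UDP socket and feed every incoming datagram to record()."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, _ = sock.recvfrom(4096)
        record(data)
```

Since the whole state is one hashtable rebuilt from a message stream, you can run several collectors and merge their XML dumps downstream.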

One of the plans now is to rewrite the profiler class in C (that would make large-scale, all-request profiling possible again), and merge it with our setproctitle extension, which currently allows us to see what part of the PHP code is being executed:

httpd: wfMsgGetKey [enwiki]
httpd: main-cleanup [dewiki]
httpd: requestfinish
httpd: query: SELECT page_id FROM `page` WHERE page_namespace = 'X' LIMIT N  [enwiki]
httpd: query: SELECT blob_text FROM `blobs` WHERE blob_id = 'X' LIMIT N  [enwiki]

Anyway, do not think of in-code profiling as a replacement for in-core profilers such as APD or Xdebug; those can tell much more accurately where you’re fast and where you’re slow (and sometimes the results surprise!). What you win with a code-controlled profiler is a general view of a deployed application in a distributed environment, relying not on the call tree but on your own defined profiling blocks.

And now we know that:

  • 20-25% of our MediaWiki execution real time is spent waiting for MySQL (3.5ms per query)
  • 16% for memcached (or actually, Tugela, my memcached+BerkeleyDB hack :)
  • 20% of requests to backend are searches, and we spend 70ms average waiting for Lucene and Mono based mwdaemon to respond
  • Saving of an article takes around 0.8s, page view is around 0.1s
  • Average times (and deviations, if someone hacks reporting :) for every query…

In most cases, developers without tools miss the real performance hotspots. This is what tools are for! :-)

lost bits in php bytecode

Wikipedia users were getting nice errors like:

Fatal error: Call to undefined function: peplaceinternallinks() in
 /usr/local/apache/common-local/php-1.5/includes/Parser.php on line 812


Table 'nlwiki.recemtchanges' doesn't exist (
 UPDATE `recemtchanges` SET ...

In both cases the code was absolutely normal, and after the rogue Apache servers (different ones!) were restarted, the glitches disappeared. No ECC errors were reported… so what should be blamed? OS? Hardware? Apache? PHP? APC? Ummm….
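Out of curiosity, one can measure how many bits separate the corrupted names from the real ones (‘r’ vs ‘p’ in replaceInternalLinks, ‘n’ vs ‘m’ in recentchanges):

```python
def bit_distance(a, b):
    """Number of flipped bits between two equal-length ASCII strings."""
    assert len(a) == len(b)
    return sum(bin(ord(x) ^ ord(y)).count("1") for x, y in zip(a, b))

# 'r' (0x72) -> 'p' (0x70): XOR is 0x02, a single flipped bit
# 'n' (0x6e) -> 'm' (0x6d): XOR is 0x03, two flipped bits
```

A single-bit flip turning ‘r’ into ‘p’ is exactly the kind of damage flaky memory (or anything between it and the opcode cache) would produce.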

php4: not supported, use php5

Tim wrote to the PHP internals list, asking:

is there any intention to backport this simple but important bugfix to PHP 4?
Many PHP users are still using PHP 4, and it's not a very well advertised fact that
it does not properly support arrays with more than 64K entries.

Markus Berger responds:

Just change to 5.

It seems that the MediaWiki HEAD branch will drop PHP 4 support soon.

ways to (not) attract tv viewers

This year December 31st is a Saturday. It means that I’m lazy in the morning, procrastinating on all tasks and doing whatever is absolutely useless. So I compared the two major TV channels in Lithuania and what they are offering their beloved viewers on New Year’s Eve. I took all the movies (and some full-length animated ones) they’re showing, checked their IMDB ratings and used some formulas everyone knows to determine whether anyone should watch TV.

Metric           LNK    TV3
Count:           8.0    9.0
Rating sum:      36.6   47.0
Rating average:  4.6    5.2
Maximum:         6.6    7.4
Minimum:         3.6    3.3
SqDev:           6.1    10.7
Median:          4.3    5.4

First of all, you may end up in terror, as both channels have sub-4-rated movies. Generally speaking, this is very, very bad. On the other hand, the averages are terribly low as well, though TV3 is in slightly better shape. If you get up early in the morning, you may watch ‘Chicken Run’, which is quite a good animated flick. You’re saved if you have cable or satellite TV, or if you’ve got enough booze to erase all memories :) I just wonder how the major TV channels could get so much crap…

And this is the proof:

LNK                     IMDB  | TV3                       IMDB
09:45 Groove squad       3.9  | 08:00 Der Weihnachtswolf   5.5
11:10 Good burger        4.2  | 09:45 Chicken Run          7.4
12:50 Mr. Nice Guy       4.8  | 11:10 The Cat In The Hat   3.3
14:25 Joe's Apartment    5.0  | 12:45 Dunston Checks In    4.6
15:55 On Deadly Ground   3.6  | 14:25 Shallow Hal          6.0
19:00 Der Clown          4.1  | 16:35 Nutty Professor II   4.5
20:50 Rush Hour          6.6  | 19:08 Gorgeous             5.4
01:30 Who's your daddy   4.4  | 00:05 Commando             5.7
                              | 01:55 Swimfan              4.6
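For the record, the “formulas everyone knows” are nothing fancy; here they are in Python, applied to the LNK column above (I take SqDev to be the sum of squared deviations from the mean, which matches the table):

```python
# IMDB ratings of the eight LNK movies from the listing above
lnk = [3.9, 4.2, 4.8, 5.0, 3.6, 4.1, 6.6, 4.4]

def median(xs):
    """Middle value, or the mean of the two middle values."""
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

count = len(lnk)                               # 8
rating_sum = sum(lnk)                          # 36.6
average = rating_sum / count                   # ~4.6
sq_dev = sum((x - average) ** 2 for x in lnk)  # ~6.1
```

Running the same over the TV3 list reproduces the other column.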

2005: year one second longer

Instead of celebrating 2006, I’ll choose to celebrate the leap second; the last time one happened was in 1998. It is a second of total freedom, not usually accounted for in budgets, calendars or anywhere else. It is announced 6 months before it happens.

On the other hand, you can make those leap seconds happen every year. All you have to do is sit in your office chair and rotate counter-clockwise. Every rotation will shorten the day by some 10⁻³⁵ s. This has to happen in the northern hemisphere, of course; southerners should rotate clockwise.

And here is the important part, when will those free seconds happen:

  • Helsinki, Vilnius, Cairo: 2006-01-01 01:59:60 local time
  • Paris, Berlin: 2006-01-01 00:59:60 local time
  • UTC: 2005-12-31 23:59:60

So, synchronize your clocks, let’s have some free time!

MySQL 5.0 optimizer: loose scans

MySQL 5.0, among lots of visible features, introduced several neat optimizer improvements that can give surprising performance for some queries. Loose index scan, or rather ‘index jumping’, allows fast aggregate MAX() or MIN() operations, as well as DISTINCT row queries.

Let’s have such a table:

+------+------+
| a    | b    |
+------+------+
|    1 |    5 |
|    1 |    3 |
|    2 |    4 |
|    2 |    2 |
|    1 |    1 |
+------+------+

A SELECT MIN(b),MAX(b) FROM ournicelittletable GROUP BY a might be executed in different ways. Usually aggregate functions need to build a temporary table in order to sort results, or use an index for sorting.
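Just to show the expected answer, here is the same aggregate run over the sample rows in SQLite from Python. SQLite won’t do a loose scan, but the result is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ournicelittletable (a INTEGER, b INTEGER)")
conn.executemany("INSERT INTO ournicelittletable VALUES (?, ?)",
                 [(1, 5), (1, 3), (2, 4), (2, 2), (1, 1)])

# group column included in the select list for readability
rows = conn.execute("SELECT a, MIN(b), MAX(b) FROM ournicelittletable "
                    "GROUP BY a ORDER BY a").fetchall()
# -> [(1, 1, 5), (2, 2, 4)]
```

However the engine gets there, group a=1 must yield min 1 / max 5, and group a=2 min 2 / max 4; the interesting part is how many rows each strategy touches on the way.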

In the case of a temporary table, aggregation is done by reading all rows from the dataset and updating grouped results in memory. With our tiny sample dataset, a table would be created with primary key ‘a’, as well as min() and max() columns. Then a table scan would happen, updating the aggregated values on every in-memory row where a bigger (for max()) or smaller (for min()) b value is found. After the full scan is done, this table can be returned directly to the user.

If there is an index on fields (a,b), then a tight index scan may happen: it would read values in this order: (1,1),(1,3),(1,5),(2,2),(2,4). In this kind of execution the engine does not need to build any temporary tables; once it has read all rows with the same ‘a’ value, it can return the minimum and maximum to the user, then continue working on other ‘a’ values.

If MySQL 5.0 finds an index on (a,b), it can do better: it simply jumps to each ‘a’ value, reads one row for min() and one row for max(), as that information is already in the index tree, and immediately returns the results to the user.
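The jumping is easy to model: if the (a,b) index is a sorted list, each group’s min and max sit at its edges, so only two entries per group need to be touched. A toy model in Python, with bisect standing in for the B-tree descent:

```python
import bisect

# the (a,b) index as a sorted list, a stand-in for B-tree leaf order
index = sorted([(1, 5), (1, 3), (2, 4), (2, 2), (1, 1)])

def loose_scan(index):
    """Jump group by group, reading only each group's first and last entry."""
    results = []
    pos = 0
    while pos < len(index):
        a = index[pos][0]
        # locate the end of this group without scanning it row by row
        end = bisect.bisect_right(index, (a, float("inf")), pos)
        results.append((a, index[pos][1], index[end - 1][1]))  # (a, min, max)
        pos = end  # jump straight to the next group
    return results
```

With many rows per group, this touches O(groups) index entries instead of O(rows), which is exactly why the EXPLAIN output below shows so few rows read.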

I built a bigger table with >200k rows: a unique field a, and 233 groups specified in b, each with 1000 values in c. One group didn’t have all 1000 values, so I tried to find which one:

mysql> select b,max(c),min(c) from tttable group by b having max(c)<1000;
+------+--------+--------+
| b    | max(c) | min(c) |
+------+--------+--------+
|  233 |     83 |      1 |
+------+--------+--------+
1 row in set (0.01 sec)    (no query cache :)

Then I tried the same on a different engine, which didn’t support loose scans, and the query took 0.2s. I repeated the executions on both engines so that the data would be cached by the OS, but the speed difference remained quite noticeable. Of course, we can check what was done in there. The 0.01s query was executed this way:

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tttable
         type: range
possible_keys: NULL
          key: b_2
      key_len: 5
          ref: NULL
         rows: 471
        Extra: Using index for group-by

Please note the ‘Extra’ line, and how many rows were read: as intended, two rows from each group, one for min() and one for max().
The 0.2s query, on the other hand, reported reading 230k rows: it actually decided that an in-memory table would be faster than reading the index, and scanned the whole table. Another operation I tried was SELECT DISTINCT b FROM tttable. MySQL 5.0 executed the query in 0.01s, while the other engine was again more than 15 times slower, simply because it had to deal with far more rows.

Conclusion: loose index scans help. Take a look at the manual.