again, on benchmarks

Dear interweb, if you have no idea what you’re writing about, keep it to yourself, don’t litter into the tubes. Some people may not notice they’re eating absolute crap and get diarrhea.

This particular benchmark has my two favorite parts, which go together really well:

“I didnt change absolutely any parameters for the servers, eg didn’t change the innodb_buffer_pool_size or key_buffer_size.”

And..

“If you need speed just to fetch a data for a given combination or key, Redis is a solution that you need to look at. MySQL can no way compare to Redis and Memcache. …”
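For context, the tuning the quoted benchmark skipped is a one-line change per setting. A hypothetical my.cnf fragment (the values are illustrative, sized for the machine at hand, not recommendations):

```ini
# hypothetical my.cnf fragment -- values are illustrative, not recommendations
[mysqld]
# let InnoDB keep the working set in memory instead of hitting disk
innodb_buffer_pool_size = 1G
# MyISAM index cache, if any MyISAM tables are in play
key_buffer_size = 256M
```

With defaults, InnoDB's buffer pool is tiny, so a "MySQL vs. in-memory store" comparison is largely a disk-vs-memory comparison.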

Seriously, how does one repair all the damage such idiotic benchmarks do?

P.S. I’ve ranted at benchmarks before, and will continue doing so.

12 thoughts on “again, on benchmarks”

  1. Who needs benchmarks? Isn’t it enough to claim that your software is fast and scalable and be done with it?

  2. Domas,

    Agree with you on this one

    I wanted to grumble about it myself, but you were faster.

  3. My favorite part is this:

    “I could see the unauthenticated users, which meant that the client had connected to MySQL and was doing a handshake using MySQL authentication (using username and password).”

    So the author didn’t set skip_name_resolve, which probably means he was mostly benchmarking his DNS lookup speed.
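    For reference, a hypothetical my.cnf fragment for that fix (skip_name_resolve turns off reverse-DNS lookups during the connection handshake; note that hostname-based GRANTs stop matching once it is on):

    ```ini
    [mysqld]
    # skip reverse-DNS lookups on connect; use IP addresses in GRANTs
    skip_name_resolve
    ```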

  4. Nice observation about benchmarks in blogs – most of them are just stupid crap, and the pile keeps growing :-/

    If people spent more time describing their benchmarks in detail, instead of just showing result tables, benchmarks would probably be more truthful – the authors would look deeper and catch their own mistakes.

    1. CREATE TABLE comp_dump (
      k binary(20) NOT NULL,
      v char(32) DEFAULT NULL,
      PRIMARY KEY (`k`) ) ENGINE=InnoDB;

      set @ct := 1;
      -- seed row (the value is exactly 32 chars, so it fits CHAR(32))
      insert into comp_dump select unhex(sha1(@ct:=@ct+1)), 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';

      -- repeat the next insert n times; each run doubles the row count,
      -- so 21 runs turn the single seed row into 2^21 rows
      insert into comp_dump select unhex(sha1(@ct:=@ct+1)), 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' from comp_dump;

      select count(*) from comp_dump;
      | 2097152 |

      time /Users/stephanevaroqui/local/mysql-5.4.1-beta-osx10.5-x86_64/bin/mysqlslap --create-schema=test -c 10 -i10000 -q benchget.sql
      Benchmark
      Average number of seconds to run all queries: 0.001 seconds
      Minimum number of seconds to run all queries: 0.000 seconds
      Maximum number of seconds to run all queries: 0.007 seconds
      Number of clients running queries: 10
      Average number of queries per client: 1

      real 0m23.337s
      user 0m3.988s
      sys 0m12.059s

      cat benchget.sql
      select v from comp_dump where k=unhex(cast(sha1(cast((2000000 ) as unsigned))as char));

      time /Users/stephanevaroqui/local/mysql-5.4.1-beta-osx10.5-x86_64/bin/mysqlslap --create-schema=test -c 10 -i10000 -q benchset.sql
      Benchmark
      Average number of seconds to run all queries: 0.001 seconds
      Minimum number of seconds to run all queries: 0.000 seconds
      Maximum number of seconds to run all queries: 0.028 seconds
      Number of clients running queries: 10
      Average number of queries per client: 1

      real 0m26.644s
      user 0m3.708s
      sys 0m12.229s
      macbook-pro-de-stephane-varoqui:~ stephanevaroqui$ cat benchset.sql
      update comp_dump set v='bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb' where k=unhex(cast(sha1(cast((2000000 ) as unsigned))as char));
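      As a sanity check on the row count above: one seed row, doubled by 21 self-joining inserts, gives 2^21 rows:

      ```shell
      # one seed row doubled 21 times -> 2^21 rows
      echo $((1 << 21))   # prints 2097152
      ```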

        1. :) Yes, I really don’t claim it to be one, just curiosity. I just introduced a primary key and a binary SHA-1 field, which at least fixes the design problem found in the bench. I just wanted to show that with a proper design and a good setup we could divide the numbers exposed to the world by at least a factor of 20. More like comparing apples to oranges instead of apples to steak :)

