
Intro to HammerDB

One of my favorite benchmarking tools is HammerDB.  In a nutshell, it lets you run a well-established SQL benchmark (TPC-C) against your system to gauge performance.  The scenario to picture is an order/warehouse simulator, and your metric is how many new orders per minute (NOPM) the system can crank out.

CPU, memory, and disk all matter for SQL, and so all make a difference in the achieved NOPM.  Other factors include how many virtual users you configure to run against the TPC-C schema, and if you want to get networking into the mix, you can run the benchmark from a client against a remote SQL Server.

HammerDB also has nice features such as an Autopilot mode where you can run, for example, 5 minutes with a single user, then 5 minutes with 2 users, then 4, then 8, then 16, then 32 … and so on.  This mode is my preferred method, as I can set up a benchmark, start it, and walk away knowing I can check the logs later for the results.

Another nice capability is the master/slave mode.  Basically, you can coordinate multiple clients to pummel a server simultaneously.  I don’t use it often, but I have used it.

Getting started…

After installing and launching, HammerDB is going to appear like this…


I’m going to be testing on MSSQL Server, so I will close Oracle and select SQL Server instead.  Then I simply pick TPC-C.


And then confirm.


Now you will need to create your initial empty TPC-C database.  Simply go into SSMS and create a new database; I’m calling mine “tpcc”.  You can get fancy and tweak the number of database files and logs, and size them to prevent autogrowth, etc.  It does help maximize the benchmark numbers.  Remember, you’ll need to be fair and tweak both systems the same way when running a comparison.
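If you prefer scripting over clicking through SSMS, the same empty database can be created with T-SQL.  This is just a minimal sketch; the file paths and sizes below are assumptions for illustration, so adjust them to your own disk layout (pre-sizing the files is what avoids autogrowth pauses during a run):

```sql
-- Create an empty "tpcc" database for HammerDB to populate.
-- Paths and sizes are illustrative; adjust to your environment.
CREATE DATABASE tpcc
ON PRIMARY
    (NAME = tpcc_data, FILENAME = 'D:\SQLData\tpcc.mdf',
     SIZE = 10GB, FILEGROWTH = 1GB)
LOG ON
    (NAME = tpcc_log, FILENAME = 'E:\SQLLogs\tpcc.ldf',
     SIZE = 4GB, FILEGROWTH = 512MB);
```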

For a first try, just use the defaults inherited from the model database.

Now that we have a “tpcc” database on SQL Server, it’s time to let HammerDB prepare it for testing.  We do this by going to Schema Build -> Options, where we’ll put in the SQL connection string info.

For your first time, just use the original schema.  Once you start tweaking for higher numbers, building a schema with multiple warehouses will help your cause.  My rule of thumb has been simply to make the number of warehouses equal to the number of processor cores … 4 processor cores, 4 warehouses; 16 cores, 16 warehouses; etc.  I don’t know if that is ideal, just what I’ve been doing.

Once the schema build options are ready, double-click Build …


Then click Yes


It may take a while; just let it build until you see TPCC SCHEMA COMPLETE.

Congrats, you have a database ready to test against.  Click the red traffic light icon (highlighted) to stop the build.

Now let’s configure a test run…

First, we need to configure the connection string again, this time for the benchmark.  Open Driver Script -> Options.  Your connection string from the schema build should populate here automatically; otherwise, you can fix it here.

I prefer using a timed test and have highlighted the options in the screenshot.  The reason is that I typically use the Autopilot test, so I can test a number of virtual users for 5 minutes at a time.  Then I can ultimately plot a chart of NOPM against the number of virtual users to get more of a profile.  Some systems scale up virtual users better than others, and you just need more data to see it.  Use Autopilot to perform the data runs for you.

Double-click Load to load those driver options.

Now, let’s go to Autopilot -> Options.

Enable Autopilot, then configure the length of the test and the number of virtual users.

Runtime: Our driver options specified a 2-minute ramp up and a 5-minute test run, and Autopilot specifies a 10-minute interval, so there is going to be a 3-minute cool down (10 - 2 - 5 = 3) where virtual users are disconnected.  If the virtual users cannot all disconnect within that cool down period, HammerDB will wait before starting the next run.
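That timing arithmetic is easy to sanity-check in a few lines of Python.  The 2/5/10-minute values are the ones from this walkthrough, and the 11-run count matches my Autopilot user sequence:

```python
# Timing for one Autopilot run, using this walkthrough's settings.
ramp_minutes = 2        # driver script ramp up
test_minutes = 5        # timed test duration
interval_minutes = 10   # Autopilot interval per virtual-user count

cooldown = interval_minutes - ramp_minutes - test_minutes
print(cooldown)  # 3 -- minutes left for virtual users to disconnect

# Total wall-clock time for a full Autopilot sequence of 11 runs:
runs = 11
print(interval_minutes * runs)  # 110 minutes, not counting any overruns
```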

Users:  For each run, there will be one control user, and the rest will be virtual users against the database.  So, basically, if you want 4 virtual users, you configure the number 5 in Autopilot.  My screenshot shows 2 3 5 9 17 33 65 129 257 513 1025, which actually corresponds to working virtual user counts of 1 2 4 8 16 32 64 128 256 512 1024.
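The off-by-one mapping is simple to generate rather than type by hand; here is a quick sketch of how the configured Autopilot numbers relate to the working user counts:

```python
# Autopilot counts include one control user, so to test N working
# virtual users you configure N + 1 in the Autopilot options.
working_users = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
configured = [n + 1 for n in working_users]
print(configured)
# [2, 3, 5, 9, 17, 33, 65, 129, 257, 513, 1025]
```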

I always check off the option to log the output and to use unique log names.  Occasionally HammerDB crashes and the scrolling log in the GUI will not be retrievable when that happens.  Use the text log files instead.  For this run, on my system, I already know that 1024 users is too ambitious and will crash the system (here is where I might use master/slave to have multiple instances run together to create the 1024 users).

Double-click Autopilot and let it run.  Stop it anytime using the red traffic light.


A log file output will look something like this …


Using Excel we can simply generate comparisons like this …
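If you’d rather script the extraction than copy numbers into Excel by hand, something like this can pull the NOPM figures out of the log files first.  This is only a sketch: the exact result-line wording varies between HammerDB versions, so the regex assumes a line of the form “… achieved <number> NOPM …”, and the sample line is illustrative rather than copied from a real log.

```python
import re

def extract_nopm(log_text):
    """Return every NOPM figure found in a HammerDB log's result lines."""
    return [int(n) for n in re.findall(r"achieved (\d+) NOPM", log_text)]

# Illustrative log line; check your own logs for the exact wording.
sample = "TEST RESULT : System achieved 41230 NOPM from 95012 SQL Server TPM"
print(extract_nopm(sample))  # [41230]
```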


I’ve used HammerDB to assist with such comparisons as VMware versus Hyper-V versus bare metal … static memory versus memory overcommit versus dynamic memory … local RAID versus SAN … SQL failover cluster versus availability group … and in this blog I plan to be using HammerDB as a tool to benchmark server core.  I’m curious to see the results!

Core versus Full – Initial install quick stats

I created two VMs with identical resources. On the first VM, the Windows 2012 R2 Core installation was chosen. On the second VM, Windows 2012 R2 Server with a GUI (Full) was chosen. After installation, I ran updates on both until no more Windows updates were found.

Here are the quick stats:

Core has more free disk space: 32,077,897,728 bytes vs. 26,787,229,696 bytes

Core uses less memory: 335 MB vs. 438 MB

Core required fewer updates: 117 fixes vs. 153 fixes

(Screenshots: Full 2012 R2 initial install / Core 2012 R2 initial install)
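The disk-space gap falls straight out of those byte counts; a quick check in Python:

```python
# Difference in free space between Core and Full, from the stats above.
core_free_bytes = 32_077_897_728
full_free_bytes = 26_787_229_696
delta_gb = (core_free_bytes - full_free_bytes) / 1024**3
print(round(delta_gb, 2))  # 4.93 -- roughly 5 GB in Core's favor
```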

No surprises at this point. Core has no GUI, so it uses a sliver less memory, takes about 5 GB less drive space, and there is less surface area to patch. I observed (though didn’t measure) that Server Core installed faster and completed patching faster too. But we’re only at step one in the life of a new server, and saving a few minutes during the install is probably not the reason Server Core is recommended.

The smaller number of patches, though, does already start to show how Server Core is going to benefit in patch maintenance over the server lifecycle.

So, do you reboot a server to apply some critical IE security fix? Hey, admins shouldn’t be browsing the web from servers anyway. But then why is IE ESC so often found disabled on servers? Besides, that missing patch is going to keep showing up on vulnerability scans, and we do have to rescan until clean, right? Ever have these debates? By incurring fewer GUI-related patches, it seems to me that Server Core also reduces the number of these time-wasting debates over the administrator lifecycle.

So where to go from here? I’m thinking SQL Server installation next, then using HammerDB to benchmark a Core SQL versus a Full SQL. Another scenario I’d also like to try is creating an IIS farm behind a load balancer using least queue depth to see which server handles more traffic as another benchmark. Hmm, I could create some how-to posts with either of those too … stay tuned …