Intro to HammerDB

One of my favorite benchmarking tools is HammerDB.  In a nutshell, it allows you to run a well-evolved SQL benchmark (TPC-C) against your system to gauge performance.  The scenario to picture is an order/warehouse simulator, and your metric is how many new orders per minute (NOPM) the system can crank out.

CPU, memory, and disk all matter for SQL, so all make a difference in the achieved NOPM.  Other factors include how many virtual users you configure to run against the TPC-C database, and if you want to get networking into the mix, you can run the benchmark from a client against a remote SQL server.

HammerDB also has nice features such as an Autopilot mode where you can run, for example, 5 minutes with a single user, then 5 minutes with 2 users, then 4, then 8, then 16, then 32 … and so on.  This mode is my preferred method since I can set up a benchmark, start it, and walk away, knowing I can check the logs later for the results.

Another nice capability is the master/slave mode, which lets you coordinate multiple clients to pummel a server simultaneously.  I don’t use it often, but I have used it.

Getting started…

After installing and launching, HammerDB is going to appear like this…

HammerDB1

I’m going to be testing on Microsoft SQL Server, so I will close Oracle and select SQL Server instead.  Then I simply pick TPC-C.

HammerDB2

And then confirm.

HammerDB3

Now you will need to create your initial empty TPC-C database.  Simply go into SSMS and create a new database.  I’m calling mine “tpcc”.  You can get fancy and tweak the number of database files and logs, and pre-size them to prevent autogrowth, etc.  It does help maximize the benchmark numbers.  Remember, you’ll need to be fair and tweak both systems the same way when running a comparison.

HammerDB4  For a first try, just use the model default database.

Now that we have a “tpcc” database on SQL, time to let HammerDB prepare it for testing.  We do this by going to the Schema Build -> Options where we’ll put in the SQL connection string info.

HammerDB5  For your first time, just use the original schema.  Once you start tweaking for higher numbers, building a schema with multiple warehouses will help your cause.  My rule of thumb has been simply to make the number of warehouses equal to the number of processor cores … 4 processor cores, 4 warehouses; 16 cores, 16 warehouses; etc.  I don’t know if that is ideal, just what I’ve been doing.

Once the schema build options are ready, double-click Build …

HammerDB6

Then click Yes

HammerDB7

It may take a while; just let it build until you see TPCC SCHEMA COMPLETE.

HammerDB8a  Congrats, you have a database ready to test against.  Click the red traffic light icon (highlighted) to stop the build.

Now let’s configure a test run…

First, we need to configure the connection string again, this time for the benchmark.  Open Driver Script -> Options.  Your connection string from the schema build should populate here automatically; otherwise, you can fix it here.

HammerDB9a  I prefer using a timed test and have highlighted the options in the screenshot.  The reason is that I typically use the Autopilot test so I can test a number of virtual users for 5 minutes at a time.  Then I can ultimately plot a chart of NOPM against the number of virtual users to get more of a profile.  Some systems scale up virtual users better than others, and you need more data points to see it.  Use Autopilot to perform the data runs for you.

Double-click Load to load those driver options.

Now, let’s go to Autopilot -> Options.

HammerDB10  Enable Autopilot, then configure the length of the test and the number of virtual users.

Runtime: Since our driver options specified a 2 minute ramp up and a 5 minute test run, and Autopilot is specifying a 10 minute interval (10 - 2 - 5 = 3), there is going to be a 3 minute cool down where virtual users are disconnected.  If the virtual users cannot all disconnect within that cool down period, HammerDB will wait before starting the next run.

Users:  For each run, there will be one control user and the rest will be virtual users against the database.  So, basically, if you want 4 virtual users, you configure the number 5 in Autopilot.  My screenshot shows 2 3 5 9 17 33 65 129 257 513 1025, which actually corresponds to working virtual user counts of 1 2 4 8 16 32 64 128 256 512 1024.
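The timing and user-count bookkeeping above can be sketched in a few lines (the ramp-up, run, and interval values are just my example configuration, not HammerDB defaults):

```python
# Autopilot bookkeeping sketch: cool-down time per run, and the
# "configured" user counts (each run includes 1 control user, so
# configure n+1 to get n working virtual users).
ramp_up, test_run, interval = 2, 5, 10      # minutes
cool_down = interval - ramp_up - test_run   # 10 - 2 - 5 = 3 minutes

working_users = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
configured = [n + 1 for n in working_users]  # what you type into Autopilot

print(cool_down)   # 3
print(configured)  # [2, 3, 5, 9, 17, 33, 65, 129, 257, 513, 1025]
```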

I always check the options to log the output and to use unique log names.  Occasionally HammerDB crashes, and the scrolling log in the GUI will not be retrievable when that happens; use the text log files instead.  For this run, on my system, I already know that 1024 users is too ambitious and will crash the system (this is where I might use master/slave mode to have multiple instances run together to create the 1024 users).

Double-click Autopilot and let it run.  Stop it anytime using the red traffic light.

HammerDB11

A log file output will look something like this …

HammerDB12

Using Excel we can simply generate comparisons like this …

HammerDB13
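Instead of copying numbers into Excel by hand, you could scrape the NOPM figures out of the log files with a short script.  This is a hedged sketch: the log line format shown is an assumption based on typical HammerDB timed-test output, so adjust the regex to whatever your version actually writes.

```python
import re

# Assumed log line shape (verify against your own HammerDB logs):
#   "Vuser 1:TEST RESULT : System achieved 41876 NOPM from 96214 SQL Server TPM"
NOPM_RE = re.compile(r"achieved (\d+) NOPM")

def extract_nopm(log_text):
    """Return every NOPM figure found in a log, in order of appearance."""
    return [int(m) for m in NOPM_RE.findall(log_text)]

sample = "Vuser 1:TEST RESULT : System achieved 41876 NOPM from 96214 SQL Server TPM"
print(extract_nopm(sample))  # [41876]
```

Pairing each extracted NOPM with its virtual user count gives you the data points for the chart.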

I’ve used HammerDB to assist with such comparisons as VMware versus Hyper-V versus bare metal … static memory versus memory overcommit versus dynamic memory … local RAID versus SAN … SQL failover cluster versus availability group … and on this blog I plan to use HammerDB as a tool to benchmark Server Core.  I’m curious to see the results!

IOPS calculations

Read and Write IOPS

Total the read and write IOPS and calculate the number of recommended spindles. For example, 10K RPM spindles can be assumed to support 130 IOPS, and 15K spindles can be assumed to support 180 IOPS. The total back-end IOPS workload is a function of the front-end (host) workload and the RAID overhead, using these formulas:
R5: RAID-Adjusted IOPS = (Disk IOPS * Read Ratio) + ((Disk IOPS * Write Ratio) * 4)
R10: RAID-Adjusted IOPS = (Disk IOPS * Read Ratio) + ((Disk IOPS * Write Ratio) * 2)
For example, an Exchange server with a peak of approximately 389 read IOPS and 419 write IOPS in an R10 configuration would need to support approximately 1,227 back-end disk IOPS (389 + 419 * 2), which translates into a minimum of ten 10K spindles or seven 15K spindles (this calculation does not take storage capacity requirements into consideration).
15K RPM drive = 180 IOPS
10K RPM drive = 130 IOPS
7,200 RPM drive = 50 IOPS
5,400 RPM drive = 40 IOPS
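The spindle math above can be wrapped in a small helper, using the per-spindle figures (130 IOPS for 10K, 180 for 15K) and the RAID write penalties (4x for R5, 2x for R10) from the formulas:

```python
import math

# Per-spindle IOPS assumptions and RAID write penalties from the text above.
SPINDLE_IOPS = {"15k": 180, "10k": 130, "7200": 50, "5400": 40}
WRITE_PENALTY = {"R5": 4, "R10": 2}

def backend_iops(read_iops, write_iops, raid):
    """RAID-adjusted back-end IOPS: reads pass through, writes are multiplied."""
    return read_iops + write_iops * WRITE_PENALTY[raid]

def spindles_needed(read_iops, write_iops, raid, drive):
    """Minimum spindle count to absorb the back-end workload (rounded up)."""
    return math.ceil(backend_iops(read_iops, write_iops, raid) / SPINDLE_IOPS[drive])

# The Exchange example: 389 read + 419 write IOPS on RAID 10.
print(backend_iops(389, 419, "R10"))              # 1227
print(spindles_needed(389, 419, "R10", "10k"))    # 10
print(spindles_needed(389, 419, "R10", "15k"))    # 7
```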

NTP tips

Checking your time against a reference NTP server

w32tm /stripchart /computer:pool.ntp.org /samples:5 /dataonly

Checking time across multiple systems using ‘psexec’ (from Sysinternals.com)

psexec \\computer1,computer2,computer3 w32tm /stripchart /computer:pool.ntp.org /samples:5 /dataonly

Setting the time service to use NTP and to serve NTP requests (for all AD servers).

Step 1:

w32tm /config /syncfromflags:manual /manualpeerlist:pool.ntp.org /reliable:yes /update
w32tm /resync

Step 2:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config]
"AnnounceFlags"=dword:00000005
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters]
"Type"="NTP"
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer]
"Enabled"=dword:00000001

If it’s a guest VM, it is best NOT to sync with the Hyper-V host server …

reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider /v Enabled /t reg_dword /d 0

Step 3:

net stop w32time
net start w32time

Checking time across all DCs:
w32tm /monitor

Checking time configuration on a DC:

w32tm /query /configuration

Core versus Full – Initial install quick stats

Created two VMs with identical resources. On the first VM, the Windows 2012 R2 Core installation was chosen. On the second VM, Windows 2012 R2 Server with a GUI (Full) was chosen. After installing both, I also ran updates on both until no more Windows updates were found.

Here are the quick stats:

Core has more free disk space: 32,077,897,728 bytes vs 26,787,229,696 bytes

Core uses less memory: 335 MB vs 438 MB

Core required fewer updates: 117 fixes vs 153 fixes

Full 2012 R2 initial install
Core 2012 R2 initial install

No surprises at this point. Core has no GUI, so it uses a sliver less memory, about 5 GB less drive space, and there is less surface area to patch. I observed (but did not measure) that Server Core installed faster and completed patching faster too. But we’re only at step one in the life of a new server, and saving a few minutes during the install is probably not the reason Server Core is recommended.

The smaller number of patches, though, does already start to show how Server Core is going to benefit patch maintenance over the server lifecycle.

So, do you reboot a server to apply some critical IE security fix? Hey, admins shouldn’t be browsing the web from servers anyway. But then why is IE ESC so often found disabled on servers? Besides, that missing patch is going to keep showing up on vulnerability scans, and we do have to rescan until clean, right? Ever have these debates? By incurring fewer GUI-related patches, it seems to me that Server Core also reduces the number of these time-wasting debates over the administrator lifecycle.

So where to go from here? I’m thinking a SQL Server installation next, then using HammerDB to benchmark a Core SQL versus a Full SQL. Another scenario I’d also like to try is creating an IIS farm behind a load balancer using least queue depth to see which server handles more traffic as another benchmark. Hmm, could create some how-to posts with either of those too … stay tuned …

Project Bookshelf

Project Bookshelf: The events depicted in this story are fictional. Any similarity to any person living or dead is merely coincidental.

(Moe) Team, let’s get these books off the floor by putting up a bookshelf on that nice sturdy wall over there.
(Kirk & Guy) Sounds great!

(Kirk) We found a bookshelf that should fit. It needs to be assembled with screws though.
(Moe) We have a screw driver.
(Kirk) Oh, it also needs to be mounted to the wall, so we have to drill some holes in the wall.
(Moe) Well, let’s use the Microvendor tool we have then, it’ll drill the holes then just change the bit and it’ll be a screw driver too.
(Kirk) Nice! That’ll work.
(Moe) Bunch of toilets clogged, tell you what, I’ll handle the toilets so you guys can focus on the bookshelf.
(Kirk & Guy) Ok!

Later …
(Moe) Hey how’s it looking … bookshelf up yet? Apparently not.
(Guy) Well, the Microvendor is okay, but I’m used to working with VMvendor drills and VMvendor just seems like a better tool.
(Moe) So?
(Kirk) So the VMvendor sales guy is coming in to show us their tools.
(Moe) ok, fine. We have a leaky roof, call me down when the sales guy is here.

(Kirk & Guy) Sales guy is here!
(sales guy) These VMvendor batteries last longer, and just look at all the VMvendor tools available. Microvendor sucks.
(Moe) Interesting, let’s compare
… test test test
(Moe) Well, seems like the batteries last about the same. And Microvendor has all the same tool selections.
(Kirk & Guy) We really, really just like the VMvendor tools better. Made in the USA, not China. USA! USA! USA!
(Moe) ok, well, how much are the VMvendor tools?
(sales guy) $160,000 the first year and $50,000 yearly maintenance … for the drill. But the other tools can be bundled in at …
(Moe) Stop right there buddy. The Microvendor tools that we already have should be just fine.
(Guy) No VMvendor? Moe, you’re so biased! Everyone uses VMvendor! How could you not choose VMvendor!?! … I quit! (Guy leaves)
(Moe) Uh, what the ..? Well, anyway, let’s get back to this bookshelf… using our Microvendor tools ok?
(Kirk) Yep, fine … although stupid Microvendor caused Guy to leave … open your mind Moe
(Moe) ok, then… Oops, got a call about a zombie rodent infestation. I’ll take care of it, so you can focus on the project

Later …
(Moe) Hey how’s it going? … Still no bookshelf? What happened now?
(Kirk) Well, Microvendor makes this caddy to charge up to 8 batteries at a time. So I had to wall mount it.
(Moe) I see. And what is this tool bench here for?
(Kirk) Well, we may have to trim a shelf one day, so the Microvendor table saw might be useful, and it needs to be mounted on something solid. So I set up the tool bench.
(Moe) Table saw? … uh, ok Kirk, let’s talk when I get back. I have to go, killer clowns are terrorizing the reception area. Should be able to get the bookshelf up quickly now, right!?
(Kirk) Absolutely!

Later …
(Moe) Damn, what’s up? The books are still all over the floor.
(Kirk) Well, we had an issue. When the table saw runs at the same time that 8 batteries are charging, the circuit breaker pops.
(Moe) Listen Kirk, maybe we should just use the simple Microvendor drill and just get the bookshelf up. Then worry about the tool bench later …
(Kirk) What!? I work to get all these advanced tools in place and you want me to go backwards now? You just don’t like that I’m ahead of you!
(Moe) wow ok. so, what do you suggest?
(Kirk) Let me just get this one issue worked out. I’ll call the electrician.
(Moe) Fine, call.

Later …
(Kirk) Electrician says we need to upgrade the circuit.
(Moe) How long is that going to take?
(Kirk) Not long, we should be ready soon.

Later …
(Moe) Hey, while you were coordinating with the electrician, I got the bookshelf put together and ..
(Kirk) How dare you! Are you competing with me or something?!
(Moe) Relax Kirk, I just want to get the books off the floor
(Kirk) Well, with my setup we’d be able to put up bookshelves way faster than YOU could manually.
(Moe) It took 12 minutes using the Microvendor drill
(Kirk) MY tool bench setup will make it take only 8 minutes, ha!
(Moe) Look, I’m going to start moving the books if you want to help …
(Kirk) Whatever! The electrician is here to change the circuit

Later …
(Moe) I’m almost done with moving the books to the bookshelf… umm, how come the electrician is still here?
(Kirk) Well, I want to make sure that the tool bench is always available. So the electrician is now helping me add a backup generator. This tool bench is a shining example of how we should work all projects!
(Moe) *sigh* but the project was the bookshelf