Reading Critically: VMware Virtual SAN Performance with SQL Server

September 2014
vendors, virtualization, vmware
17 Comments

Seriously, I’m not trying to pick on VMware documentation, but lately there’s some odd stuff coming out. Last week it was a really bad book, and today it’s a technical white paper from VMware itself – VMware Virtual SAN Performance with Microsoft SQL Server. It’s an easy read – a 9-page PDF – but here are the highlights.

Think about the white paper’s title for a second: it sounds like it’s going to test how well the storage performs with SQL Server. Virtual SAN – so we’re testing a storage-intensive workload, right? Wrong, because the very first paragraph of the executive summary says:

“Experiments show the Virtual SAN storage sub-system is never the bottleneck and the workload saturates the host CPU while the I/O latency remains constant.”

Oooo, must be a lot of throughput if the CPU goes to 100%, right? Surely they didn’t handcuff the guests with small amounts of CPU power to keep the storage from being a problem. Let’s check out page 4:

“The entire DVD Store benchmark tools, including the query generator and the database backend, were encapsulated in a single virtual machine, which ran the Microsoft Windows Server 2008 R2 operating system and Microsoft SQL Server 2008. The virtual machine was configured with 4 virtual CPUs (vCPUs), and 4GB of memory.”

They ran the front-end app and the database on a single VM with 4GB RAM. This isn’t a load test, it’s a grudge match! And it’s utterly pointless because each host has 128GB memory. They’re just leaving the memory sitting idle. Alright, so what kind of throughput were they able to get with these little toys?

“An aggregated “orders per minute” of 77,206 across 12 DVD Store instances”

Wait – what? I’ve never seen a SQL Server benchmark that just added up all of the transactions that a bunch of completely isolated SQL Servers ran, and called that a single score. You wouldn’t run a real store that way – that’s 12 different databases that can’t see each other.

But let’s play along anyway – 77,206 orders per minute divided by 12 databases is about 6,434 orders per minute per instance, which works out to a whopping 107 transactions per second each. If you haven’t done benchmarking before, that number is what we in the performance industry call “low.” Even when combined, 1,287 transactions per second isn’t all that impressive – especially when I bet a single properly configured VM with that exact same hardware could eclipse all 12 of these misconfigured guests. (Not to mention the licensing overhead of the setup in this white paper.) To put this in perspective, Anandtech got 1,940 orders per second on a single server back in 2009.
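
If you want to check that math, here’s the division spelled out as a quick query. Nothing fancy: just the white paper’s own 77,206 orders per minute and 12 instances, converted to per-instance and per-second numbers.

    -- Back-of-the-envelope math from the white paper's own numbers:
    -- 77,206 aggregate orders per minute across 12 DVD Store instances.
    SELECT 77206 / 12.0      AS orders_per_minute_per_instance,  -- ~6,434
           77206 / 12.0 / 60 AS orders_per_second_per_instance,  -- ~107
           77206 / 60.0      AS aggregate_orders_per_second;     -- ~1,287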

I don’t have anything against VMware’s Virtual SAN product – I bet it’s awesome – but this white paper doesn’t do it justice.


17 Comments

  • Mathew Walters
    September 3, 2014 7:48 am

    Hey Brent

    Unfortunately, the audience for white papers published these days doesn’t seem to be the techies using the products.

    One thing I’d like to point out though (reading critically ;)) is the transactions-per-second figure you came up with.

    The problem with the “Orders per minute” metric is they don’t say how many actual database transactions it takes to place an order.

    They state
    “…the absolute orders per minute were computed from the cumulative number of transactions that was printed every 10 seconds.”

    There’s no info on what the computation is, so for all the reader knows it may actually be hundreds of transactions per order (unlikely, I know :)).

    It may well be one transaction per order, and you may well know the benchmark suite; if so, apologies. I certainly haven’t looked at it before 🙂

    Cheers
    Mat

    Reply
    • Brent
      September 3, 2014 8:02 am

      Mat – right, it can be any number of real database transactions, but even counted at the business-transaction (order) level, that’s laughably low. I’ve added a link to an Anandtech benchmark from several years ago that was >10x faster per SQL Server.

      Reply
  • James Lupolt
    September 3, 2014 10:00 am

    “The virtual machine was configured with 4 virtual CPUs (vCPUs), and 4GB of memory.”

    If I recall correctly, the VMware and SQL Server book you reviewed recently also recommended building VMs of a similar size. My guess is that these are cases of telling IT managers what they want to hear: you can consolidate all (or nearly all) your servers into small VMs on shared hypervisors.

    Reply
    • Brent
      September 3, 2014 3:09 pm

      James – yes, that one also recommended 4GB guests for production SQL Servers. I’m really stunned that folks – especially those who do performance tuning work – haven’t discovered the magic of letting SQL Server cache data in memory.
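
      For anyone playing along at home, it’s a one-line fix. A rough sketch, assuming a guest sized to use most of a 128GB host and leaving headroom for the OS (the exact cap is yours to tune):

          -- Rough sketch, not a one-size-fits-all setting: on a guest sized for most
          -- of a 128GB host, cap SQL Server below the total so the OS keeps headroom.
          EXEC sys.sp_configure N'show advanced options', 1;
          RECONFIGURE;
          EXEC sys.sp_configure N'max server memory (MB)', 114688;  -- ~112GB; adjust for your environment
          RECONFIGURE;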

      Reply
  • John Sterrett
    September 3, 2014 11:06 am

    Wow.. that is all I can say. Great catch Brent.

    Reply
    • Brent
      September 3, 2014 3:08 pm

      John – thanks sir!

      Reply
  • Charles
    September 3, 2014 1:27 pm

    Looks like another interesting read.

    Would you recommend using SQLIO to test the storage subsystem, or is it a whole new ball game with a virtual SAN implementation?

    Reply
    • Brent
      September 3, 2014 3:08 pm

      Charles – sure, generally SQLIO is good for stress testing storage.

      Reply
  • Steve
    September 4, 2014 4:43 pm

    My experience of virtual environments is that some architects and infrastructure guys want to treat SQL instances like any other application server and provision them with minimal amounts of RAM and CPU. They typically want to squeeze as many VMs out of the hardware as possible, so DB VMs using 4GB of RAM is probably music to some people’s ears. Fortunately I’ve not worked anywhere that wanted to overprovision RAM.

    Reply
    • Brent
      September 5, 2014 7:39 am

      Steve – yep, I’ve been brought into lots of those environments too. The admins are always stunned at the incredible performance improvements you can get just by provisioning reasonable amounts of memory. Gotta love those easy fixes.

      Reply
  • sql_handle
    September 9, 2014 5:40 pm

    Load generator and database both on a 4GB VM on an ESXi host w/128GB RAM.
    “They’re just leaving the memory sitting idle.”
    Probably not; likely it’s worse than idle (from a load-testing standpoint).

    Dollars to donuts the ESXi host filesystem cache is the reason there was very little vSAN traffic in this test config. Writes – especially txlog writes – would flush through. But reads are probably coming in once and staying in that oversized (compared to the VM’s total RAM) filesystem cache.

    Reply
  • James Lupolt
    September 12, 2014 5:06 am

    Another white paper recommending a low-memory configuration, though not quite as extreme:

    http://h20195.www2.hp.com/V2/GetPDF.aspx%2F4AA4-7945ENW.pdf

    “In this section, there was a single SQL Server 2012 instance, with two databases totaling 1,088 GB of space usage, excluding logs. Actual data files space usage was approximately 85%.”

    …

    “– Increasing “max server memory” from 8 GB to 28 GB provided a reported increase of SQL Server Transactions/sec of 144%. Continuing to increase the “max server memory” from 28 GB to 48 GB only provided a 19% increase in transactions/sec, which is not as efficient a use of server memory resources.”

    Reply
    • Brent
      September 12, 2014 5:12 am

      James – nice find. What makes it really interesting to me is that they don’t drill down at all into what the next bottleneck was. After adding the memory, is it a transaction log bottleneck? An indexing problem? What? Enquiring minds want to know! 😉

      Reply
  • Richard L. Dawson
    August 23, 2018 2:13 pm

    Hi Brent,
    I realize this article is REALLY old in computer years. Have you seen any more recent tests that may actually represent what we do in the real world? The company I work for has, unfortunately for me, already purchased VMware’s latest iteration of vSAN tech, and now I have to figure out what that will mean for our SQL Servers.

    Thanks and have a great day.
    Richard

    Reply
    • Brent
      August 26, 2018 9:14 am

      Richard – no, sorry.

      Reply
  • Nalin
    March 12, 2019 4:53 am

    HI Brent,

    We were recently (last week) told about the shiny new SAN solution (I knew about it) from VMware and how well it can run SQL Server. I know VMware was appalling at running our even more appalling school admin software (based on SQL Server). Do you reckon an all-flash vSAN supporting SQL Server among other workloads is now a real contender?
    I doubt it, but I’m very happy to listen to your experience and expertise, please.

    thanks.

    Reply
    • Brent
      March 12, 2019 9:22 am

      Nalin – yes, that’s exactly the kind of question I help my clients with. Feel free to contact me when you’re ready for consulting help.

      Reply


