Calculating VMware HA Failover Capacity

Most readers probably know that VMware High Availability (or VMware HA) is the feature of VMware Infrastructure 3 that allows for virtual machines (VMs) to be rebooted on another available host in the event of an unexpected host failure.  In these types of scenarios, a physical host goes down unexpectedly, typically due to hardware failure, and with it go a bunch of VMs.  With VMware HA, these downed VMs will reboot on a different physical server in the HA cluster, thus minimizing downtime.

I had always assumed that “failover capacity,” i.e., the number of VMs that could be supported in an HA cluster after a host failure, was calculated by VMware HA in an intelligent fashion similar to that used by VMware Distributed Resource Scheduler (VMware DRS).  In other words, VMware HA would look at the needs of the downed VMs, consider what is available across the various hosts, and then place virtual workloads accordingly.  Sadly, that is not the case.

This article, titled “HA Failover Capacity,” by a VMware technical support engineer known as “VMwarewolf,” provides more detailed information on how failover capacity is actually calculated.  What actually happens is that VMware HA calculates a number of “slots” by taking the smallest amount of RAM installed in any server in the cluster and dividing it by the largest amount of RAM configured for any single VM in the cluster.  In the article, the example is given of a server that has 16GB of RAM with at least one VM configured for 2GB of memory.  That would create 8 slots (16GB / 2GB = 8 slots) for VMware HA.
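
To make that first step concrete, here’s a minimal Python sketch of the slot arithmetic (the function name and structure are my own illustration, not VMware’s actual implementation):

```python
# Slot size is set by the largest configured VM in the cluster.
# Partial slots don't count, so use floor division.
def slots_for_host(host_ram_gb: int, largest_vm_ram_gb: int) -> int:
    return host_ram_gb // largest_vm_ram_gb

print(slots_for_host(16, 2))  # 8 slots, matching the example above
```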

That in and of itself is bad enough, since not all VMs will require 2GB, but here’s where it gets worse.  After calculating the number of “slots” available on the smallest server in the cluster, VMware HA then extrapolates the total number of slots in the cluster using the number from that smallest server.  So if one server in the HA cluster has 16GB but the remaining three have 64GB, all four servers will be treated as having only 16GB for the purposes of calculating HA “slots.”  Instead of the three bigger servers each contributing 32 slots, they’ll show up as having only 8 slots apiece.  Ouch!
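
To see how much that extrapolation can understate capacity, here’s a small Python sketch of the whole calculation as described above (again, the names and structure are illustrative assumptions, not VMware’s code):

```python
# Cluster-wide HA slots, per the behavior described above: every host is
# treated as if it had only as much RAM as the smallest host in the cluster.
def cluster_ha_slots(host_ram_gb: list[int], vm_ram_gb: list[int]) -> int:
    slot_size = max(vm_ram_gb)               # largest configured VM sets the slot size
    slots_per_host = min(host_ram_gb) // slot_size
    return slots_per_host * len(host_ram_gb)

hosts = [16, 64, 64, 64]                     # one 16GB host, three 64GB hosts
print(cluster_ha_slots(hosts, [2]))          # 32 slots (8 slots x 4 hosts)

# What you might naively expect if each host were counted independently:
print(sum(h // 2 for h in hosts))            # 104 slots (8 + 32 + 32 + 32)
```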

Be sure to keep this in mind when creating VMware HA clusters and planning for fault tolerance.

Also, if you aren’t reading VMwarewolf’s stuff, you may want to start.  He (or perhaps she?) is posting some good stuff.

ESX, Virtualization, VMware



    7 thoughts on “Calculating VMware HA Failover Capacity”

    1. Thanks for the post. I have an ESX farm of about 12 servers but just recently started doing HA. Luckily all servers are of the same spec, but even after taking the VMware classes I was not aware that HA wouldn’t calculate each server independently. Good to know!

    2. Pingback: vmware ha
    3. Our new ESX cluster has 3 Dell 2950 servers with 12, 16, and 32 GB of RAM. From this article I need to look into making them all 32GB.

      =O. That is some ridiculous processing power! I picked up a semi-dedicated server and all I got was 2GB of RAM dedicated!

    4. Wow I can’t even fathom using that much computing power at this stage…

    5. That is some serious computing power! Could play some great games on that =D
