VMware vSphere 4.1 HA and DRS clustering improvements

vSphere 4.1 introduces improved clustering affinity rules for the Distributed Resource Scheduler (DRS), High Availability (HA) Application Monitoring and HA health status reporting, among other changes.

In order to create either a VMware High Availability (HA) or Distributed Resource Scheduler (DRS) cluster, you need vCenter configured and ready to rock 'n' roll. You should know that the new version of vCenter is 64-bit. This has been on the cards for some time now, and I personally made the move to 64-bit from top to bottom at the beginning of this year to be ready for the new release.

The shift to a 64-bit vCenter means an increase in scalability, with new configurable maximums of up to 3,000 virtual machines (VMs) per cluster and up to 320 VMs per ESX host. Previous versions of HA and DRS had configurable maximums which were at odds with each other; in vSphere 4.1 these numbers have been aligned. Remember that if you're using VMware View with the Linked Clones Composer service, it currently limits you to only eight ESX hosts per cluster.

These new configurable maximums also apply during HA events. More specifically, the limits impose themselves in a post-failover scenario. For example, let's pretend you had a two-node HA cluster. The maximum number of VMs allowed on each ESX host would be just 160, not 320. You would still need enough "headroom" to tolerate the loss of one of the ESX hosts without exceeding the 320 VMs per host maximum.
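
To put some numbers on that headroom argument, here's a back-of-the-envelope sketch in Python using the maximums quoted above. The helper name and the assumption that a failed host's VMs spread evenly across the survivors are mine, purely for illustration:

```python
# Back-of-the-envelope HA headroom maths for the vSphere 4.1 maximums.
VMS_PER_HOST_MAX = 320      # per-host maximum in vSphere 4.1
VMS_PER_CLUSTER_MAX = 3000  # per-cluster maximum in vSphere 4.1

def safe_vms_per_host(hosts, host_failures_to_tolerate=1):
    """Per-host VM count that still leaves room for HA failover.

    Assumes a failed host's VMs restart evenly across the survivors.
    """
    survivors = hosts - host_failures_to_tolerate
    if survivors < 1:
        raise ValueError("cluster cannot tolerate that many host failures")
    # Post-failover, each survivor must stay at or below the per-host cap:
    # v * hosts / survivors <= cap  =>  v <= cap * survivors / hosts
    per_host = (VMS_PER_HOST_MAX * survivors) // hosts
    return min(per_host, VMS_PER_CLUSTER_MAX // hosts)

print(safe_vms_per_host(2))  # 160 -- the two-node example above
print(safe_vms_per_host(8))  # 280 -- bigger clusters waste less headroom
```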

Of course, the likelihood of anyone coming remotely near these theoretical limits is small, unless you have a bank balance the size of a small Middle Eastern oil state's. The amount of physical RAM needed to get 320 VMs on a single ESX host would be so cost prohibitive that it would probably be more economical to scale out the solution rather than scale up.

High Availability health status

Casting these rather conceptual issues aside, let's delve into more detail about the new features in vSphere 4.1. One of the first changes you will see is the introduction of a new "health status"-style option on the Summary tab of the HA cluster. This "Cluster Operational Status" dialog box is the single location for any alarms or alerts about the cluster. The dialog box reads "none" when there are no configuration issues with the cluster.

Additionally, the algorithms governing the interplay between HA and DRS have been reworked to improve the intelligence shared between the two core clustering features. In the past, customers reported that they very occasionally saw DRS "get it wrong," in the sense that DRS would move VMs based purely on performance criteria with scant regard for availability. What this means is that, in the past, it was possible (if somewhat unlikely) for DRS to place 20 VMs on one ESX host and only put eight VMs on another.

While that may have been a good idea from a performance standpoint, it could lead to scenarios where DRS itself created an "eggs in one basket" problem, because DRS didn't distribute VMs to prevent one ESX host from becoming more heavily populated (with a bigger VM count) than another. In this scenario, DRS would have to carry out VMotions to free up resources so that HA could power on a VM.

Additionally, VMware outfitted HA with a brand new "Application Monitoring" component in vSphere 4.1.

High Availability Application Monitoring

From my discussions with VMware, it seems clear that the new Application Monitoring feature is more of an enablement application programming interface (API) that will allow third parties to add hooks into the services the VM is running for more intelligent failover behavior -- along the same lines as NeverFail's vApp for HA technology.

For now, Application Monitoring doesn't do much unless a third-party vendor chooses to adopt these new hooks in VMware HA. It seems clear that VMware is sticking to being guest OS agnostic by offering no in-guest availability solution of its own. Given VMware's commitment to cloud computing, it will be interesting to see how long VMware can maintain this guest OS neutrality.
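
To make the pattern concrete, here is a purely conceptual Python sketch of the heartbeat loop an in-guest monitoring agent might run. Every name in it (service_is_healthy, send_heartbeat) is a hypothetical placeholder, not VMware's API; the real hooks are whatever a third-party vendor builds against the new interface:

```python
# Conceptual model only: an in-guest agent heartbeats while the
# application it watches is healthy. All names are hypothetical.
import subprocess
import time

def service_is_healthy(name):
    """Hypothetical probe: is the watched process still running? (Linux)"""
    return subprocess.call(["pidof", name]) == 0

def send_heartbeat():
    """Hypothetical stand-in for the vendor's call into the HA hooks."""
    print("heartbeat")

# While heartbeats keep arriving, HA leaves the VM alone. If the loop
# exits because the application died, the heartbeats stop and HA can
# respond -- for example, by restarting the VM.
while service_is_healthy("mysqld"):
    send_heartbeat()
    time.sleep(30)
```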

Improvements to Fault Tolerance

As many people expected, Fault Tolerance (FT) has been overhauled, and VMware has removed some adoption barriers. From a networking perspective, the FT "logging" network process has been improved with increased throughput and decreased CPU overhead, and it now supports the new VMXNET3 driver for FT-protected VMs.

VMware FT is now more integrated with the DRS feature; protected VMs benefit from the core "initial placement" and load-balancing functionality that they were previously excluded from. This introduces a new requirement: the Enhanced VMotion Compatibility (EVC) feature must be enabled to allow DRS to properly locate the primary and secondary VMs on hosts that provide better performance.

vSphere 4.1 allows the primary and secondary VMs to reside on ESX hosts with different patch levels through the use of FT-specific versioning controls. This also allows the vCenter system to properly differentiate between the primary and secondary VM, which should improve the audit trail within the tasks and events component of vCenter.

But the most welcome improvement is compatibility with DRS. Prior to vSphere 4.1, the primary and secondary VMs were excluded from DRS functionality, so once they were placed on the relevant ESX hosts, they could only be moved manually by the administrator. With this said, the compatibility issue was more of a pain for other features dependent on the full-automation mode that DRS offers, such as maintenance mode and VMware Update Manager. Prior to vSphere 4.1, the administrator would have to manually oversee the process to complete the task; vSphere 4.1 does away with these limitations.

Improved Distributed Resource Scheduler (DRS) affinity rules

vSphere 4.1 introduces improved VM DRS "affinity rules," which allow the administrator to restrict where a VM runs to specific groups of ESX hosts within a cluster. The ability to say that specific VMs should never reside on the same host (anti-affinity) or should always run together on the same host (affinity) has been around since the days of ESX 3.

What's new is the ability to group VMs by common name and restrict their execution to a specific subset of ESX hosts within the cluster. These restrictions can be used to ensure companies meet the terms and conditions of various software vendors whose licensing models inhibit the moving of VMs from one host to another. Additionally, the restrictions allow administrators to split a pool of VMs across a series of blade enclosures or racks, so that the loss of an enclosure or an entire rack would not jeopardize the availability of an entire application or distributed service.

The feature works by the administrator creating DRS Groups for collections of VMs (VM Groups) and collections of ESX hosts (Host Groups).

Once these groups have been created, they appear in a new rule type called "Virtual Machines to Hosts." Once the VM Groups and Host Groups are associated with each other, the administrator is given four logical options (modeled in the sketch after this list):

  1. Must run on hosts in group,
  2. Should run on hosts in group,
  3. Must not run on hosts in group, and
  4. Should not run on hosts in group.
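
To make the moving parts concrete, here is a minimal Python sketch that models VM Groups, Host Groups and the four rule types. The class and field names are my own shorthand for illustration, not the vSphere API:

```python
# Illustrative model of the "Virtual Machines to Hosts" rule type.
from dataclasses import dataclass
from enum import Enum

class RuleType(Enum):
    MUST_RUN_ON = "Must run on hosts in group"              # hard affinity
    SHOULD_RUN_ON = "Should run on hosts in group"          # soft affinity
    MUST_NOT_RUN_ON = "Must not run on hosts in group"      # hard anti-affinity
    SHOULD_NOT_RUN_ON = "Should not run on hosts in group"  # soft anti-affinity

@dataclass
class DrsGroup:
    name: str
    members: list  # VM names for a VM Group, host names for a Host Group

@dataclass
class VmHostRule:
    name: str
    vm_group: DrsGroup
    host_group: DrsGroup
    rule_type: RuleType

# Classic licensing use case: pin the Oracle VMs to the licensed hosts.
oracle_vms = DrsGroup("Oracle-VMs", ["oradb01", "oradb02"])
licensed_hosts = DrsGroup("Licensed-Hosts", ["esx01", "esx02"])
rule = VmHostRule("oracle-licensing", oracle_vms, licensed_hosts,
                  RuleType.MUST_RUN_ON)
print(rule.name, "->", rule.rule_type.value)
```

The example pins a pair of Oracle VMs to two licensed hosts -- the classic licensing use case described above.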

DRS affinity rules: Must vs. should

Clearly, the last two options are used to exclude certain hosts from running the VMs, but much hinges on the subtle difference between "must" and "should." As you can probably tell, the "must" options are hard requirements that can never be broken, even if DRS or HA attempts to go against those rules.

This means that if an ESX host crashed, and a VM was restarted, it could only be restarted on ESX hosts within its own host group. VMware recommends that these settings only be used when breaking the hard affinity rule would result in the organization breaching a license agreement; in VMware parlance these are viewed as "required" rules.

In contrast, the "should" option is used in the availability scenario where, in the best of all possible worlds, VM1 and VM2 would reside on different enclosures or racks, but in the event of a failure they would be allowed to breach the rule so they could be restarted by HA. VMware refers to these as "preferential" rules that can be breached to allow the correct functioning of DRS, HA and Distributed Power Management (DPM).

The best way to understand the difference between "must" and "should" is to see the first as a hard-affinity rule and the second as a soft-affinity rule; the former is much more rigorously applied than the latter. The general recommendation: Use these rules very sparingly, as every affinity/anti-affinity rule you create limits the opportunities or "slots" for DRS to move a VM to another ESX host to improve its performance.
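
One way to internalize the hard/soft distinction is that a "must" rule filters the candidate hosts outright, while a "should" rule merely ranks them. The sketch below models that behavior; the function and rule names are illustrative, not VMware's implementation:

```python
# Illustrative only: hard rules exclude hosts, soft rules just demote them.
def candidate_hosts(vm, hosts, must_rules, should_rules):
    # Hard ("required") rules: filter. If no host passes, placement fails.
    allowed = [h for h in hosts
               if all(rule(vm, h) for rule in must_rules)]
    # Soft ("preferential") rules: rank. Hosts satisfying more of the
    # should-rules sort first, but none are excluded.
    return sorted(allowed,
                  key=lambda h: sum(rule(vm, h) for rule in should_rules),
                  reverse=True)

# Example: keep "web01" off enclosure B if possible, but never on esx04.
hosts = ["esx01", "esx02", "esx03", "esx04"]
must_rules = [lambda vm, h: h != "esx04"]
should_rules = [lambda vm, h: h not in ("esx02", "esx03")]  # enclosure B
print(candidate_hosts("web01", hosts, must_rules, should_rules))
# -> ['esx01', 'esx02', 'esx03']  (esx04 excluded; esx01 preferred)
```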

DRS's sister feature, DPM, received some small enhancements in vSphere 4.1 as well, in the shape of a new scheduled task option that allows you to apply DPM settings inside or outside of production hours.

As you can see, there are many tweaks and enhancements to the clustering features in vSphere 4.1. The most important are the changes to the underlying algorithms which govern the interaction between DRS and HA. Sadly, you won't see these changes in the graphical interface, so many people may overlook them.

Customers will welcome the greater control that the new DRS rules offer, but what they would really welcome is independent software vendors totally abandoning outdated licensing policies created in the Jurassic era of "physicalization."

Mike Laverick (VCP) has been involved with the VMware community since 2003. Laverick is a VMware forum moderator and member of the London VMware User Group Steering Committee. Laverick is the owner and author of the virtualization website and blog RTFM Education, where he publishes free guides and utilities aimed at VMware ESX/VirtualCenter users, and has recently joined SearchVMware.com as an Editor at Large. In 2009, Laverick received the VMware vExpert award and helped found the Irish and Scottish VMware user groups. Laverick has had books published on VMware Virtual Infrastructure 3, VMware vSphere4 and VMware Site Recovery Manager.

This was first published in July 2010
