Avoiding downtime with VMware Fault Tolerance and High Availability
Now that the initial sound and fury around VMware vSphere 5 and its licensing policy has died down, it's time to focus on the new features. One area where the platform really shines is its storage and availability improvements.
VMware vSphere 5 includes Storage Distributed Resource Scheduler (DRS), which uses Storage vMotion to automatically load balance disks. Adding to the storage enhancements is VMware High Availability (HA), which now has a new heartbeat system based on storage heartbeats. There are so many large and small new features to discover in VMware vSphere 5, but let’s start by taking a look at these vSphere 5 storage improvements.
The case for Storage DRS
Prior to Storage DRS in VMware vSphere 5, it was up to the administrator to place a virtual machine’s disks on the right data stores to ensure sufficient IOPS and space for the virtual disks themselves. It was not uncommon for admins to merely select data stores according to the amount of free space and place virtual machines (VMs) on the one with the most available space.
This method often caused multiple disk-intensive VMs to be placed on the same data store, creating contention. It was also easy for one data store to become oversaturated with IOPS while another went underutilized. Storage DRS, on the other hand, does for IOPS what DRS has been doing for CPU and memory on ESX hosts: It allows an administrator to make an initial placement for a virtual disk, knowing that Storage DRS will move that disk over time as its needs change.
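The contrast above can be sketched in a few lines of Python. This is purely illustrative pseudologic, not VMware's actual placement algorithm: the data-store fields and the simple "load first, space as tiebreaker" rule are assumptions made for the example.

```python
# Hypothetical sketch: free-space-only placement vs. a Storage DRS-style
# choice that also weighs I/O load. Field names are illustrative.

def pick_by_free_space(datastores):
    """Naive placement: the data store with the most free space wins."""
    return max(datastores, key=lambda ds: ds["free_gb"])

def pick_balanced(datastores):
    """Prefer the least I/O-loaded data store; use free space to break ties."""
    return min(datastores, key=lambda ds: (ds["iops_load"], -ds["free_gb"]))

datastores = [
    {"name": "ds1", "free_gb": 900, "iops_load": 4500},  # roomy but busy
    {"name": "ds2", "free_gb": 600, "iops_load": 300},   # quiet
]

print(pick_by_free_space(datastores)["name"])  # ds1 -- the contention risk
print(pick_balanced(datastores)["name"])       # ds2
```

The naive rule lands the new disk on the busiest data store simply because it has the most room, which is exactly the pattern the article describes.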
How Storage DRS works
Just like DRS clusters, Storage DRS can apply affinity and anti-affinity rules to ensure that two disk-intensive virtual disks don't land on the same data store and create contention. Ideally, Storage DRS will place your VM on the right storage from day one. It takes the guesswork out of which data store to select, and if your VM's I/O demands change over time, it can relocate the VM to a data store that meets your policy.
Storage DRS groups data stores into a domain where load balancing can take place. It uses Storage vMotion to move VMs from one data store to another in the cluster. Storage DRS triggers a Storage vMotion based on two conditions: when a data store starts to run out of free space or when it detects latency caused by excessive I/O.
Storage DRS uses a trending algorithm that evaluates storage load every eight hours. To keep data stores from filling up, Storage DRS starts evacuating VMs when a data store becomes 80% full. Of course, you can adjust any of these load-balancing parameters.
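The two triggers can be modeled as a simple check, shown below as a hedged sketch. The 80% space threshold comes from the article; the 15 ms I/O latency figure is vSphere's documented default but, like the space threshold, is configurable in a real cluster. The function name is made up for illustration.

```python
# Illustrative sketch of the two Storage DRS triggers: space
# utilization and I/O latency. Thresholds mirror the defaults
# described in the text; both are tunable in a real cluster.

SPACE_THRESHOLD = 0.80      # evacuate when a data store is 80% full
LATENCY_THRESHOLD_MS = 15   # default I/O latency threshold (assumed)

def storage_vmotion_triggers(used_gb, capacity_gb, observed_latency_ms):
    """Return the list of triggered conditions; empty means no move."""
    reasons = []
    if used_gb / capacity_gb >= SPACE_THRESHOLD:
        reasons.append("space")
    if observed_latency_ms >= LATENCY_THRESHOLD_MS:
        reasons.append("latency")
    return reasons

print(storage_vmotion_triggers(850, 1000, 9))   # ['space']
print(storage_vmotion_triggers(400, 1000, 22))  # ['latency']
```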
Because of these data store clusters, you can be less concerned about deciding on which data store to place a VM. Storage DRS is a step toward a world where storage complexity is reduced and we can focus on consumption of only the storage we truly need.
The companion to Storage DRS is another vSphere 5 storage feature called Profile-driven Storage, which creates a policy system for assigning the right type of storage when you're creating VMs. Some admins don't put much thought into the different characteristics of their storage. Profile-driven Storage gathers a range of capabilities that your storage reports through the vSphere Storage APIs for Storage Awareness (VASA). These could include attributes such as disk type, number of spindles, RAID level and whether replication is enabled.
Once VASA returns these storage attributes to vSphere 5, you can create a storage profile and match it to the IOPS required by a given VM. So, for example, you can create classifications for storage such as platinum, gold, silver and bronze or compliant vs. non-compliant. That way, when the policy is applied to a VM, Profile-driven Storage can separate the data stores that meet your policy from ones that don’t. Using Profile-driven Storage along with Storage DRS is the optimal way to manage vSphere 5 storage.
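The compliant-vs.-non-compliant split described above can be sketched as a simple capability match. This is a minimal illustration, not VMware's implementation: the capability keys and the "gold" profile below are invented for the example.

```python
# Hedged sketch of Profile-driven Storage matching: VASA-reported
# capabilities are compared against a VM's storage profile, and data
# stores are split into compliant and non-compliant sets.
# Capability names here are illustrative, not real VASA attributes.

def split_by_compliance(datastores, profile):
    """Return (compliant, non_compliant) data store name lists."""
    compliant, non_compliant = [], []
    for ds in datastores:
        caps = ds["capabilities"]
        ok = all(caps.get(key) == value for key, value in profile.items())
        (compliant if ok else non_compliant).append(ds["name"])
    return compliant, non_compliant

gold_profile = {"disk_type": "SSD", "replication": True}
datastores = [
    {"name": "ds-gold",   "capabilities": {"disk_type": "SSD",  "replication": True}},
    {"name": "ds-bronze", "capabilities": {"disk_type": "SATA", "replication": False}},
]

print(split_by_compliance(datastores, gold_profile))
# (['ds-gold'], ['ds-bronze'])
```

When such a policy is attached to a VM, only the compliant set would be offered for placement, which is the behavior the article describes.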
VMware HA storage intelligence
VMware vSphere 5 came with a complete VMware HA overhaul. A new election process now controls what happens should the master cluster node become unavailable or orphaned from the network.
In previous versions, the administrator had to make sure there were multiple management network paths to avoid a "split brain" scenario caused by a network partition or disconnection. If you didn't monitor those paths, you ran the risk of a false positive -- a failover triggered even though there was nothing wrong with the host.
VMware HA in vSphere 5 has new intelligence that allows the storage network to be used as backup if a host becomes orphaned from the cluster. The host’s connection to both its network and storage must be unavailable before failover can be triggered. That means the storage network and the redundancy it provides create an additional layer of checks and balances.
Data stores used for failover monitoring are referred to as heartbeat data stores, and the HA clustering dialog boxes let you control which data stores are used for that purpose. A heartbeat data store will have a .vSphere-HA folder at its root. But remember, the storage heartbeat is only consulted when the management network fails and a host becomes disconnected.
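The failover decision described above boils down to a two-channel check, sketched here as a minimal illustration. The function and parameter names are assumptions made for the example, not part of the HA agent's actual interface.

```python
# Minimal sketch of the vSphere 5 HA decision described in the text:
# a host is declared failed (triggering failover) only when BOTH its
# management-network heartbeat and its storage heartbeat are lost.

def should_failover(network_heartbeat_ok, storage_heartbeat_ok):
    """Decide whether the cluster should restart this host's VMs."""
    if network_heartbeat_ok:
        return False   # host is reachable: nothing to do
    if storage_heartbeat_ok:
        return False   # network-isolated but alive: no false positive
    return True        # both channels lost: treat the host as failed

print(should_failover(False, True))   # False -- the old false positive avoided
print(should_failover(False, False))  # True  -- genuine host failure
```

The middle branch is the new layer of checks and balances: a host cut off from the management network but still writing storage heartbeats is treated as isolated, not dead.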
With these VMware vSphere 5 storage features, adoption of vSphere 5 is likely to be smooth for most customers, and it should give admins plenty of new capabilities to work with.