For several versions, VMware virtual environments have been able to automatically balance CPU and memory
resources among the hosts of a vSphere cluster. Using a feature known as Storage Distributed Resource Scheduler (Storage DRS), a VMware virtual environment can also balance disk capacity and performance across the data stores of a data store cluster.
When set properly for each environment, the Storage DRS functionality, which has been available since vSphere 5.0 was released, should result in better storage performance for all virtual machines placed in the data store cluster.
Storage DRS monitors the capacity utilization of all data stores in a data store cluster and uses predictive analysis to determine whether a Storage vMotion migration would result in a better-balanced environment. To balance the performance of a data store cluster, Storage DRS monitors the I/O latency of the individual data stores, performs a similar analysis and initiates Storage vMotion operations to ensure the best latency across all data stores. Depending on the version of vSphere and the storage arrays involved, Storage DRS will also check whether two data stores reside on the same physical disks in the array, to prevent migrating a VM to another data store that would experience the same performance issues.
Storage DRS makes recommendations or performs Storage vMotion migrations any time used space exceeds the configured threshold. It evaluates I/O load every eight hours and will only make recommendations when it recognizes an imbalance that has persisted for longer than a couple of hours. This avoids unnecessary migrations caused by short-term spikes.
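This two-part trigger logic can be sketched in a few lines of Python. This is a simplified, hypothetical model of the behavior described above, not VMware's actual algorithm; the function names and the one-sample-per-hour latency history are illustrative assumptions.

```python
# Simplified, hypothetical model of the Storage DRS trigger behavior
# described above -- not VMware's actual implementation.

SPACE_THRESHOLD = 0.80        # default used-space threshold (80%)
LATENCY_THRESHOLD_MS = 15.0   # default I/O latency threshold (15 ms)

def space_trigger(used_fraction: float) -> bool:
    """Space checks fire whenever used space exceeds the threshold."""
    return used_fraction > SPACE_THRESHOLD

def latency_trigger(latency_samples_ms: list[float],
                    sustained_hours: int = 2) -> bool:
    """I/O checks only fire when the latency imbalance has persisted
    (one sample per hour here), filtering out short-term spikes."""
    recent = latency_samples_ms[-sustained_hours:]
    return (len(recent) >= sustained_hours
            and all(s > LATENCY_THRESHOLD_MS for s in recent))
```

The key design point is the asymmetry: the space check reacts immediately, while the latency check requires a sustained condition before it will recommend a migration.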
Fine-tuning the Storage DRS details
Once the data store cluster has been created and Storage DRS has been enabled, several settings need to be configured.
The first configuration is the automation level. There are only two options: No Automation, where Storage DRS will make recommendations but any migrations must be initiated by an administrator; or Fully Automated, where Storage DRS will execute Storage vMotion migrations without administrator interaction.
The second configuration is to enable Storage DRS to provide recommendations or perform migrations based upon I/O latency. Disabling this feature tells Storage DRS to only check for data store use.
The third configuration is to set the thresholds that Storage DRS will use as triggers to initiate a change in the environment. The default threshold for used space is 80% and can be set to anything between 50% and 100%. The default threshold for I/O latency is 15ms and can be set anywhere between 5ms and 100ms, though it shouldn't be set higher than the Storage I/O Control congestion threshold. By default, Storage DRS only initiates a Storage vMotion if the utilization difference between the source and destination data store is greater than 5%. This setting can be modified to anything between 1% and 50%. The timing of the imbalance check and the aggressiveness of I/O load balancing can also be set.
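As a sketch of how these thresholds interact when a space-based recommendation is evaluated, the following Python models the defaults listed above. The class and function names are hypothetical and the logic is a simplification, not the vSphere API or VMware's actual decision code.

```python
# Hypothetical sketch of the Storage DRS threshold settings and how they
# combine -- names and structure are illustrative, not the vSphere API.

from dataclasses import dataclass

@dataclass
class SdrsThresholds:
    used_space_pct: float = 80.0      # configurable 50-100, default 80
    io_latency_ms: float = 15.0       # configurable 5-100, default 15
    min_space_diff_pct: float = 5.0   # configurable 1-50, default 5

def recommend_space_migration(src_used_pct: float,
                              dst_used_pct: float,
                              t: SdrsThresholds = SdrsThresholds()) -> bool:
    """Recommend a move only when the source exceeds the used-space
    threshold AND the source/destination utilization differ by more
    than the configured minimum (default 5%)."""
    return (src_used_pct > t.used_space_pct
            and (src_used_pct - dst_used_pct) > t.min_space_diff_pct)
```

Note how the utilization-difference check suppresses migrations that would barely improve the balance, even when the source data store is over its threshold.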
Administrators can schedule changes to these settings, which is useful when allowing Storage DRS to be fully automated. For example, Storage DRS can be made much more aggressive during nonbusiness hours while only performing critical Storage vMotion migrations during the day.
The fourth configuration is to define anti-affinity rules. By default, the individual virtual disks (VMDKs) of each virtual machine are kept together when performing Storage vMotion operations. If these disks should be kept separate, a VMDK anti-affinity rule can be created. This allows the log, tempdb and data disks of a database VM to remain separated from one another, which is a best practice for database performance. VMs can also be kept on different data stores by defining a VM anti-affinity rule.
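Both rule types reduce to the same constraint, which a placement check can express compactly. The sketch below is illustrative only; the dictionary format (object name mapped to data store name) is an assumption, not how vSphere represents rules.

```python
# Illustrative check for the anti-affinity rules described above.
# Both rule types enforce the same constraint: every listed object
# (a VMDK for a VMDK rule, a VM for a VM rule) must land on a
# distinct data store. The placement format is hypothetical.

def anti_affinity_ok(placement: dict[str, str]) -> bool:
    """Return True when no two listed objects share a data store."""
    datastores = list(placement.values())
    return len(datastores) == len(set(datastores))
```

For instance, a VMDK rule for a database VM passes when its data, log and tempdb disks each map to a different data store, and fails as soon as any two share one.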
The final configuration screen is the individual virtual machine settings. Each VM can have its own Storage DRS automation level defined. Also, you can disable the default of keeping VMDK files together for specific VMs.