Virtualization brings myriad benefits to the enterprise, but the I/O demands of virtual machines can become problematic. Every VM imposes some level of storage traffic, and the resulting I/O operations per second can lead to contention that might impair the performance of important workloads.
Controlling storage IOPS provides administrators with a means to ensure that every VM workload gets access to adequate I/O -- without starving other VMs of vital I/O needs. But storage I/O control isn't perfect in every scenario, so administrators must apply the technology carefully to ensure the best results. Let's highlight the concepts of storage I/O control in VMware vSphere 6.5, and consider some of the potential issues that administrators might encounter.
Storage I/O control is a vSphere feature that helps improve storage quality of service by avoiding or mitigating storage I/O contention between VMs within the environment.
Virtualization designers have long understood that some amount of I/O throttling can be necessary. Without storage I/O control, each VM must access storage resources independently. This can lead to contention as a greater number of VMs try to share limited I/O bandwidth or other system resources, or as busy VMs demand a disproportionately large amount of storage I/O bandwidth -- termed "noisy neighbors." Thus, some storage I/O control scheme has long been part of the vSphere architecture.
Storage I/O control is disabled by default and must be deliberately enabled per data store through VMware vCenter. Once enabled, storage I/O control does nothing until storage I/O latency reaches a configured threshold -- 30 milliseconds by default -- which administrators can tune to fit the organization's performance needs.
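The activation behavior can be pictured as a simple latency check: throttling engages only once average datastore latency crosses the configured threshold. The sketch below is illustrative only -- the function name, the plain moving average and the sample window are assumptions, not VMware's implementation.

```python
# Illustrative sketch: models when SIOC-style throttling would engage.
# The function name and the simple mean are assumptions, not VMware's code.

DEFAULT_THRESHOLD_MS = 30.0  # vSphere's default congestion threshold


def should_throttle(latency_samples_ms, threshold_ms=DEFAULT_THRESHOLD_MS):
    """Return True when average observed datastore latency exceeds the threshold."""
    if not latency_samples_ms:
        return False
    avg = sum(latency_samples_ms) / len(latency_samples_ms)
    return avg > threshold_ms


# Below the 30 ms default, storage I/O control stays idle.
print(should_throttle([12.0, 18.0, 22.0]))  # → False
# Sustained latency above the threshold would trigger throttling.
print(should_throttle([35.0, 41.0, 38.0]))  # → True
```

Lowering the threshold makes throttling kick in sooner, which protects latency-sensitive VMs at the cost of throttling more often.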
vSphere uses an I/O queue to divide a finite number of I/O "slots" among the VMs that need storage access. VMs submit requests to the queue, and contention occurs when more requests arrive than there are slots available. The resulting latency forces VMs to wait too long for storage access, impairing the performance of affected workloads -- and possibly the broader business.
When excessive latency occurs, storage I/O control limits the number of requests that each VM can submit to the queue, which should ease the contention. The storage I/O control mechanism detects storage I/O latency, performs the calculations needed to throttle the I/O queue when needed, and then steps back when traffic demands ease.
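The throttle-and-relax behavior described above amounts to a feedback loop: shrink the queue depth while latency exceeds the threshold, and grow it back as traffic eases. The sketch below is a minimal illustration of that idea, assuming hypothetical step sizes and bounds -- it is not VMware's actual algorithm.

```python
# Illustrative feedback loop: back off the shared queue depth under
# congestion, step it back up as latency eases. All step sizes and
# bounds here are assumptions for illustration.

THRESHOLD_MS = 30.0
MIN_DEPTH, MAX_DEPTH = 4, 64


def adjust_queue_depth(current_depth, avg_latency_ms, threshold_ms=THRESHOLD_MS):
    """Return a new queue depth based on the latest observed latency."""
    if avg_latency_ms > threshold_ms:
        return max(MIN_DEPTH, current_depth // 2)  # halve under contention
    return min(MAX_DEPTH, current_depth + 4)       # step back up as load eases


depth = 64
for latency in [45.0, 50.0, 33.0, 25.0, 20.0]:  # sample latency readings
    depth = adjust_queue_depth(depth, latency)
    print(latency, depth)  # depth shrinks to 8, then recovers toward 64
```

The asymmetric response -- halving quickly, recovering gradually -- mirrors the general congestion-control intuition that backing off fast and ramping up slowly keeps latency stable.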
Older versions of vSphere required administrators to enable and configure storage I/O control for disks and VMs individually -- a time-consuming and potentially error-prone manual process that existed outside of other policy-driven operations within the vSphere environment.
Once administrators enable storage I/O control on a data store and select a suitable storage latency threshold, they can apply the policy to VMs or to individual disk files, such as .vmdk files.
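When throttling engages, each disk's slice of the datastore queue is proportional to its configured shares. The helper below is a hedged sketch of that proportional division -- the function name and share values are illustrative, not a vSphere API.

```python
# Illustrative sketch: divide a datastore's I/O slots across disk files
# in proportion to their shares. Names and values are hypothetical.


def entitled_slots(disk_shares, total_slots):
    """Split I/O slots across disks in proportion to their share values."""
    total_shares = sum(disk_shares.values())
    return {disk: total_slots * shares // total_shares
            for disk, shares in disk_shares.items()}


# Example: three .vmdk files, one with high shares (2000) and two with
# the normal default (1000). The high-shares disk gets twice the slots.
print(entitled_slots({"app.vmdk": 2000, "db.vmdk": 1000, "web.vmdk": 1000}, 32))
# → {'app.vmdk': 16, 'db.vmdk': 8, 'web.vmdk': 8}
```

This is why shares only matter under contention: when the queue is not full, every VM simply gets the I/O it asks for.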