VMware virtual machines can acquire unintentional, undetected and unnecessary resource limits. Identifying and removing these limits will improve vSphere performance and help VMs meet application owners' expectations.
VMware vSphere allows administrators to limit resource delivery to VMs, but you may have limits in place without knowing it. Health checks on the virtual environment frequently uncover accidental CPU or RAM limits on VMs, degrading vSphere performance for no reason. Once you discover them, you can remove these VM resource limits. But be warned: Getting rid of resource limits, even accidental resource limits, can have negative consequences. In some scenarios, you may want to keep them.
Managing VM resources
VMware vSphere's resource delivery rules ensure a graceful and predictable degradation in performance when resource demand exceeds supply. These rules matter because almost every vSphere deployment overcommits some resource, and investing in additional physical servers is not always an option. Reservation rules guarantee a minimum level of resources so that VMs always deliver on their service-level agreements. Share rules control which VM gets what resources when demand exceeds supply. Together, these controls maximize the business benefit of a finite amount of physical hardware.
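As a sketch of how reservations and shares are set from PowerCLI, the snippet below guarantees a VM a RAM floor and favorable CPU shares. The VM name "SQL01" is a placeholder, and parameter names vary slightly across PowerCLI versions (older releases use MB-based parameters such as -MemReservationMB):

```powershell
# Guarantee 2 GB of RAM and give the VM high CPU shares, so it wins
# contention without capping its maximum. "SQL01" is a placeholder name.
Get-VM -Name "SQL01" |
    Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -MemReservationGB 2 -CpuSharesLevel High
```

Note that neither setting caps the VM: a reservation and shares only shape what the VM is guaranteed and how it competes under contention.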
Limits are very different: a limit caps the physical resources delivered to a VM, and it is enforced even when more CPU or RAM is available. A limit prevents the VM from accessing spare resources, so the VM performs worse than it would without one. Since they needlessly degrade vSphere performance, limits on VMs are almost always a bad idea.
How to find and change resource limits
Unintended VM resource limits can stem from a bug in vCenter, the management platform for vSphere. At one time, a VM converted or cloned to a template acquired a RAM limit equal to its configured RAM, and every VM deployed from that template retained the same limit. Even when the VM is later configured with more RAM, the limit stays at its old value, so the VM gains no additional usable RAM and performance is unchanged. You can imagine finding that a slow VM appears RAM-constrained and going through the whole process of adding more RAM a couple of times -- only to see no change in VM performance -- before giving up.
Limits are fairly easy to identify in the vSphere client's Resource Allocation tab, which is available on Distributed Resource Scheduler clusters, standalone hosts and resource pools. It lists the Shares, Reservations and Limits of all child objects.
To find limits via a command line, open PowerCLI and use the Get-VMResourceConfiguration cmdlet. If the CpuLimitMhz and MemLimitGB properties are anything other than -1, the VM has resource limits. The vCheck health-check script for PowerCLI is excellent; among other things, it reports every VM with a CPU or RAM limit. If you're writing your own script, you can use the Set-VMResourceConfiguration cmdlet to remove the limits while you're there.
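A one-liner along these lines lists every limited VM in the inventory. This is a sketch that assumes a PowerCLI session already connected to vCenter (via Connect-VIServer); on older PowerCLI releases the memory property is MemLimitMB rather than MemLimitGB:

```powershell
# List every VM that has a CPU or memory limit set (-1 means unlimited).
Get-VM |
    Get-VMResourceConfiguration |
    Where-Object { $_.CpuLimitMhz -ne -1 -or $_.MemLimitGB -ne -1 } |
    Select-Object VM, CpuLimitMhz, MemLimitGB
```

Run against a healthy environment, this should return nothing; any rows it does return are candidates for the cleanup described below.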
When we remove the limits from these VMs, they can access more resources. If the infrastructure has ample free CPU and RAM, resource utilization increases and VM performance improves. But if there weren't many spare resources to begin with, you may see new performance issues elsewhere in the infrastructure: added pressure on a resource-scarce environment creates competition. Even in these situations, removing RAM limits can unburden storage, because a RAM limit forces a VM to swap to its paging file or partition, and a VM that no longer pages puts far less load on storage.
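Removing the limits can be scripted as a variation of the discovery one-liner. This sketch assumes a connected PowerCLI session; passing $null is the commonly used way to clear a limit back to unlimited, though you should test it against your PowerCLI version first:

```powershell
# Remove CPU and RAM limits from every VM that currently has one.
# Passing $null clears the limit (sets it back to unlimited).
Get-VM |
    Get-VMResourceConfiguration |
    Where-Object { $_.CpuLimitMhz -ne -1 -or $_.MemLimitGB -ne -1 } |
    Set-VMResourceConfiguration -CpuLimitMhz $null -MemLimitGB $null
```

Given the contention risks described above, consider running the discovery step alone first and removing limits in batches rather than all at once.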
Resource limits have their place. If a vSphere user pays only for the resources they consume, limits impose a cost cap on the infrastructure. In that case, apply limits to resource pools rather than to individual VMs; a pool limit is a good way to deliver to a project, business unit or customer only the resources they paid for.
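A pool-level cap along these lines lets the VMs inside the pool share one purchased allotment. The pool name "Customer-A" and the cap values are illustrative, and as above, older PowerCLI releases use an MB-based parameter (-MemLimitMB) instead of -MemLimitGB:

```powershell
# Cap a resource pool rather than individual VMs. All VMs in the
# "Customer-A" pool (a placeholder name) share this CPU and RAM cap.
Get-ResourcePool -Name "Customer-A" |
    Set-ResourcePool -CpuLimitMhz 8000 -MemLimitGB 32
```

VMs inside the pool then compete with each other under the cap according to their shares, while VMs outside the pool are unaffected.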