Keep apps happy through proper allocation of virtual machine resources

Sizing VMs with sufficient resources -- and understanding how those resources relate to one another -- will ensure application performance and user happiness.

As your estate of VMs grows, it is important to understand how the allocation of each resource affects performance, not just within a single VM but across the groups of VMs that share those resources.

Good application performance is no accident. It requires knowledge of the application and of the various layers delivering resources to it. VMs draw their resources from several areas: CPU, memory, network and disk. The relationship between these resources should be considered when planning VMs; otherwise, an application inside the VM will underperform if it has too little -- or too much -- of a particular resource.


CPU

Each vCPU on each VM runs on only one physical core at a time, so faster CPU clock speeds can make for faster VMs. Similarly, more vCPUs can make for higher-performing applications. A complicating factor is that in an ESXi server, the physical CPUs are shared among VMs. The more cores your ESXi server has, the larger the share of a core each vCPU will get, so more cores is often better than faster cores.

If your VM needs a lot of CPU time, give it a second vCPU, but keep in mind that more than a couple of vCPUs on a VM may not make the application faster, because only a multithreaded application can effectively use multiple vCPUs.

Worse still, a multi-vCPU VM is harder for the VMkernel to schedule, meaning the application may actually be slower with excess vCPUs. Modern servers often have a huge number of cores, so there is usually ample CPU time for all the VMs on an ESXi server, provided the VMs are sized sensibly.
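
To sanity-check sizing, compare the vCPUs allocated across a host's VMs with the physical cores available. Below is a minimal Python sketch of that check, assuming a hypothetical inventory; in practice the figures would come from vCenter (for example, via the pyVmomi SDK) or an esxtop export, and the two-vCPU threshold is only a rule of thumb.

from dataclasses import dataclass

# Hypothetical inventory records for one ESXi host.
@dataclass
class VM:
    name: str
    vcpus: int

def vcpu_to_core_ratio(vms: list[VM], physical_cores: int) -> float:
    """Ratio of allocated vCPUs to physical cores on the host."""
    return sum(vm.vcpus for vm in vms) / physical_cores

def oversized_vms(vms: list[VM], max_vcpus: int = 2) -> list[str]:
    """Flag VMs with more vCPUs than most single applications can use."""
    return [vm.name for vm in vms if vm.vcpus > max_vcpus]

vms = [VM("web01", 2), VM("db01", 8), VM("app01", 4), VM("file01", 1)]
print(f"vCPU:core ratio = {vcpu_to_core_ratio(vms, 16):.2f}")
print("Right-sizing candidates:", oversized_vms(vms))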

RAM

RAM is usually the limiting resource in an ESXi server, so the allocation of RAM to VMs requires caution. The VMkernel is very clever with RAM; it will let VMs use all of the physical RAM in the ESXi server while trying to prevent any VM from holding physical RAM it is not actually using.

When physical RAM is entirely used up, the VMkernel must decide which VMs keep their physical RAM and which ones release what they have, a practice called "reclaiming," carried out through techniques such as ballooning and hypervisor swapping. Any time physical RAM is being reclaimed from a VM, there is a risk the VM's performance will suffer. The more RAM that is reclaimed, the bigger the risk.
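
Ballooned and swapped memory are the visible symptoms of reclamation, and vSphere reports both per VM. The sketch below, using made-up numbers, rates reclaim risk from those two figures; the 10% ballooning threshold is an illustrative assumption, not a fixed rule.

from dataclasses import dataclass

# Hypothetical per-VM memory statistics, all in MB.
@dataclass
class MemStats:
    name: str
    granted_mb: int
    ballooned_mb: int
    swapped_mb: int

def reclaim_risk(vm: MemStats) -> str:
    """The more RAM reclaimed from a VM, the bigger the risk."""
    if vm.swapped_mb > 0:
        return "high (hypervisor swapping)"
    pct = 100 * vm.ballooned_mb / vm.granted_mb if vm.granted_mb else 0
    if pct > 10:
        return f"elevated ({pct:.0f}% ballooned)"
    return "low"

for vm in [MemStats("db01", 16384, 0, 512),
           MemStats("web01", 4096, 1024, 0),
           MemStats("app01", 8192, 0, 0)]:
    print(vm.name, "->", reclaim_risk(vm))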

It is wise to allocate only the RAM that VMs require to get the job done; handing out extra RAM increases the risk of reclamation. On the other hand, when the VM's operating system uses otherwise unused RAM as a disk cache, it can significantly reduce the load on the disk system, so there is a tradeoff.

For database servers and VDI desktops, it can be much more cost-effective to give the VM more RAM -- and run fewer VMs on each ESXi server -- than to buy a higher-performance storage array. Again, the key is to allocate enough RAM for the VM's workload without allocating too much.
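
The tradeoff is easy to put numbers on. This toy calculation shows how per-VM RAM allocation translates into VM density on a host; the 256 GB host and the per-VM sizes are purely illustrative, and no RAM overcommit is assumed.

# Illustrative only: larger RAM allocations mean fewer VMs per host.
host_ram_gb = 256
for vm_ram_gb in (8, 16, 32):
    print(f"{vm_ram_gb:>3} GB per VM -> {host_ram_gb // vm_ram_gb} VMs per host")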

Network bandwidth

Network bandwidth means both the bandwidth between the VM and its virtual switch, and the bandwidth out of the virtual switch to the physical world. Those looking to maximize the bandwidth out of the VM should use the VMXNET3 network adapter, which provides the best throughput with the least CPU overhead. Use VMXNET3 in all of your VMs whenever the guest operating system supports it.
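
A simple hygiene check is to scan for VMs still carrying an emulated adapter such as the E1000. Here is a minimal sketch over a hypothetical inventory; a real version would read the virtual NIC device types from vCenter (for example, via the pyVmomi SDK).

# Hypothetical mapping of each VM to its virtual NIC types.
inventory = {
    "web01": ["vmxnet3"],
    "db01": ["vmxnet3", "vmxnet3"],
    "legacy01": ["e1000"],
}

# Flag anything that is not the paravirtual VMXNET3 adapter.
for vm, nics in inventory.items():
    old = [n for n in nics if n != "vmxnet3"]
    if old:
        print(f"{vm}: consider replacing {', '.join(old)} with vmxnet3")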

For the link to the physical network, make sure the ESXi hosts have the fastest physical NICs available. 10 Gigabit Ethernet is a great choice, even if it means you have only a small number of physical NICs, because the extra headroom allows VMs to tolerate bursts of extreme network traffic.

Bear in mind that a VM doing a lot of network transfers will consume CPU time both within the VM and from the VMkernel for every packet sent or received. As such, a VM on a CPU-constrained ESXi server may experience low network throughput because it is being denied CPU time.

Disk performance

Disk is often the silent killer of VM performance. VM disk performance is most often constrained by the number and type of disks in an array and the number of VMs that share those disks. Because a centralized shared storage architecture funnels all VM disk access through one place, it is very easy for the storage controllers and disks in an array to become overloaded, leaving the VMs waiting for the storage.

ESXi accounts for the time a VM spends waiting for disk IO in the same wait state it uses for the VM's CPU idle time, but the effect on performance is very different: a VM waiting on IO will not do any other work, so a high IO wait time means degraded performance. Careful storage design is imperative to avoid this scenario.
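
Latency is the most direct signal that VMs are waiting on storage. The sketch below applies rough thresholds of the kind often used when reading esxtop's DAVG/KAVG latency counters; the stats records, sample values and the 10 ms and 20 ms cutoffs are illustrative assumptions.

from dataclasses import dataclass

# Hypothetical per-VM disk latency, comparable to esxtop's DAVG/KAVG.
@dataclass
class DiskStats:
    name: str
    avg_latency_ms: float

def storage_health(d: DiskStats) -> str:
    """Sustained high latency usually means the array is the bottleneck."""
    if d.avg_latency_ms > 20:
        return "overloaded: VM is waiting on the array"
    if d.avg_latency_ms > 10:
        return "watch closely"
    return "healthy"

for d in [DiskStats("db01", 35.0), DiskStats("web01", 4.2)]:
    print(d.name, "->", storage_health(d))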

This was first published in March 2014
