With modern servers, a vSphere cluster can have many terabytes of RAM and thousands of gigahertz of CPU time, which is a lot of resource to manage as a single pool.
To manage large vSphere clusters, vCenter offers a feature called resource pools. Administrators can carve a large cluster into smaller pools with separate resource policies for a business unit or a new project, isolating each pool's resource demands from the others.
On a short-term basis, VMs share the resources of a single ESXi server. On a long-term basis, the VMs share the resources of a DRS cluster made up of multiple ESXi servers. A larger cluster allows more efficient use of resources; allowing a single cluster to serve multiple workloads can be very beneficial.
To get value out of resource pools, an administrator must actively manage them, making sure they are neither misused nor left static, so they deliver the right resources as workloads change. Otherwise, VM performance can suffer.
The wrong way to use resource pools
One particularly poor use of resource pools is to organize VMs into groups. I'm surprised how often I come across resource pools that are used as a logical grouping for VMs. It seems some administrators don't like changing their inventory view from "Hosts and Clusters," so they use resource pools where they should use folders in the "VMs and Templates" view of the inventory.
Another poor use is a high-priority pool with high CPU shares and a low-priority pool with low shares, where both are set once and never adjusted as VMs come and go from the pools.
The other classic mistake is to have both resource pools and VMs as children of the same parent pool. A single VM then competes as a sibling of an entire pool, so it can end up with a disproportionately large slice of resources compared with each VM inside the neighboring pool.
Adding VMs means pool settings need to adjust
As long as there is no shortage of resources, all of the VMs will perform well. However, if the administrator does not adjust the resource pool structure after adding VMs, then performance will decrease.
When there is a shortage of resources, competition grows between objects with the same parent. The cluster is the first parent and all the pools and VMs directly in that cluster compete. Reservations, limits and shares govern what resources are delivered to each child.
Then VMs and pools within the first level of pools compete for resources. A pool with 10 VMs in it and 4,000 shares will receive the same amount of resources as a sibling pool with 4,000 shares that contains 100 VMs. In the 10-VM pool, each VM will get a share of the pool's resource. In the 100-VM pool, each VM will get a much smaller share because there are more VMs competing for the resources delivered to the pool.
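The arithmetic above can be sketched in a few lines of Python. This is a simplified model with hypothetical capacity numbers, not the actual DRS algorithm, which also weighs reservations, limits and active demand:

```python
def per_vm_allocation(cluster_capacity_mhz, pools):
    """Split contended capacity across sibling pools by shares,
    then evenly across the VMs inside each pool."""
    total_shares = sum(shares for shares, _ in pools)
    return [
        (cluster_capacity_mhz * shares / total_shares) / vm_count
        for shares, vm_count in pools
    ]

# Two sibling pools with equal shares (4,000 each) but unequal VM counts,
# competing for a hypothetical 20 GHz of contended CPU.
pools = [(4000, 10), (4000, 100)]  # (shares, number of VMs)
print(per_vm_allocation(20000, pools))  # [1000.0, 100.0]
```

Each pool receives the same 10,000 MHz, but a VM in the 10-VM pool gets 1,000 MHz while a VM in the 100-VM pool gets only 100 MHz.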
Moving VMs so the two pools hold equal numbers, without changing the pool settings, would give the VMs equal resources. Changing the share values to reflect the number of VMs in each pool would also even out resource delivery. Either way, resource pool settings often need to change when the number or type of VMs inside the pools changes.
Resource pools maintain priorities
A good use of resource pools is to isolate high-priority VMs from low-priority VMs. A high-priority resource pool might have a reservation equal to the sum of the configured CPU and RAM for all of the VMs in the pool. This setup ensures the high-priority VMs get all the resources they are allocated.
In another resource pool the administrator might have a reservation that is half the sum of the VM configuration, meaning fewer guaranteed resources. When VMs are added to or removed from the pools, the reservation on the pool needs to be changed. A better method is to set the reservation on the VM. That way, the reservations will follow a VM if it moves.
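As a sketch, the two reservation strategies might be compared like this. The VM names and sizes are hypothetical; in practice the configured values come from vCenter (for example, via PowerCLI or pyVmomi):

```python
def pool_reservation(vms, fraction=1.0):
    """Reserve a fraction of the summed configured CPU (MHz) and RAM (MB)
    of the pool's member VMs."""
    cpu = sum(vm["cpu_mhz"] for vm in vms)
    ram = sum(vm["ram_mb"] for vm in vms)
    return {"cpu_mhz": cpu * fraction, "ram_mb": ram * fraction}

vms = [
    {"name": "db01", "cpu_mhz": 4000, "ram_mb": 8192},   # hypothetical VMs
    {"name": "app01", "cpu_mhz": 2000, "ram_mb": 4096},
]
print(pool_reservation(vms))                 # high-priority pool: full reservation
print(pool_reservation(vms, fraction=0.5))   # lower-priority pool: half
```

The point of the sketch is that the pool reservation is derived from the VM list, so it must be recalculated whenever VMs join or leave; per-VM reservations avoid that bookkeeping because they travel with the VM.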
Putting a limit on a resource pool
Limits put a ceiling on the total resources available to the VMs in the resource pool. This is a good solution in a situation where developers are allowed to create their own VMs. They can make as many VMs in the pool as they want but cannot consume more resources than the limit. This way the developer VM resources are isolated from the production VMs, even though they all run in one DRS cluster.
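A short sketch of the effect of a pool limit, with hypothetical numbers: however many VMs the developers create, the pool's total consumption stays capped, so the worst-case slice per VM shrinks as VMs are added:

```python
def per_vm_under_limit(vm_count, pool_limit_mhz):
    """Each VM's worst-case CPU when every VM in a limited pool is busy."""
    return pool_limit_mhz / vm_count

dev_pool_limit_mhz = 10000  # hypothetical 10 GHz ceiling on the developer pool
print(per_vm_under_limit(5, dev_pool_limit_mhz))   # 2000.0 MHz each
print(per_vm_under_limit(20, dev_pool_limit_mhz))  # 500.0 MHz each
```

More developer VMs mean less CPU per VM, but the production VMs outside the pool never see more than 10 GHz of developer demand.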