A common task for virtualization architects is deciding how many virtual machines (VMs) to put on each data store. Different virtual machines have different storage needs, with performance as one of the driving factors alongside capacity, availability and disaster recovery (DR).
The general guideline for the number of virtual machines on a data store is 10-15-20. That is, 10 VMs with high storage demands could share a data store, 15 normal VMs could share another, and 20 "quiet" VMs could share a third. For most VMs, this is a sound standard. For critical VMs with specific storage needs, however, make sure you satisfy their demands by considering capacity, performance and availability.
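The 10-15-20 guideline can be turned into a quick sizing estimate. Below is a minimal sketch; the tier names and the function are illustrative, not part of any VMware tooling.

```python
# Sketch of the 10-15-20 guideline: given counts of busy, normal and
# quiet VMs, estimate how many data stores each tier needs.
import math

# Per-data-store VM caps from the 10-15-20 guideline.
VMS_PER_DATASTORE = {"busy": 10, "normal": 15, "quiet": 20}

def datastores_needed(vm_counts):
    """Return the number of data stores required per tier, rounding up."""
    return {
        tier: math.ceil(count / VMS_PER_DATASTORE[tier])
        for tier, count in vm_counts.items()
    }

print(datastores_needed({"busy": 25, "normal": 40, "quiet": 35}))
# 25/10 rounds up to 3, 40/15 to 3, 35/20 to 2 data stores
```

A real plan would adjust the caps per tier based on measured I/O, but the rounding-up logic is the useful part: a tier with 25 busy VMs needs three data stores, not two and a half.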
Consider limits and plan accordingly
There is a limit to the number of virtual machines you can place on a single data store, but it is impractically high: based on the limit on the number of files a data store can hold, several thousand virtual machines could share a single data store.
At the other end of the scale, you could have a data store for each virtual machine or even for each disk in each virtual machine. This would lead to many data stores; even in a small environment, you could easily hit the 255 data store maximum on an ESXi server. Somewhere between these extremes lies a sensible middle range.
One real limit is capacity: there is only so much space to store the virtual machine files. If your data stores are 500 GB and your virtual machines are 40 GB each, then raw capacity allows for 12 virtual machines per data store at most -- and fewer than 10 once you account for the overhead described in the next section.
Additional space considerations
Make sure to account for the VMkernel swap file, which defaults to the same size as the amount of RAM configured on the virtual machine, less any memory reservation.
Leave some room for snapshot files; even if snapshots are just for virtual machine backups, these files can grow quite a bit.
Configure enough space so the data store free-space alarms aren't normally triggered. By default, these alarms turn amber at 25% free space and red -- the last warning before real trouble -- at 15%. When free space drops below 1%, all the virtual machines will stop. You don't want that.
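Putting the space considerations above together gives a more realistic VM-per-data-store figure than raw capacity alone. This sketch uses the article's 500 GB/40 GB figures, swap equal to configured RAM, and keeps 25% free so the default amber alarm never trips; the per-VM snapshot allowance is an assumed number, not a VMware default.

```python
# Rough capacity planning: disk + swap + snapshot space per VM,
# against the usable portion of the data store (staying under the
# default 25%-free alarm threshold). snapshot_gb is an assumption.
def vms_per_datastore(datastore_gb=500, disk_gb=40, ram_gb=4,
                      snapshot_gb=8, free_fraction=0.25):
    usable = datastore_gb * (1 - free_fraction)  # keep the alarm quiet
    per_vm = disk_gb + ram_gb + snapshot_gb      # disk + swap + snapshots
    return int(usable // per_vm)

print(vms_per_datastore())  # 375 // 52 -> 7 VMs, well under the raw 12
```

Note how quickly the overhead bites: the same 500 GB data store that holds 12 VMs on raw disk size holds only 7 once swap, snapshot room and alarm headroom are budgeted.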
Consider the structure of the storage array
Another limit is performance. All the virtual machines that share a data store share the performance of that data store -- specifically, of the set of disks in the storage array behind it. Critical virtual machines with high-performance storage requirements will sometimes have data stores all to themselves, while less-critical or lower-performance virtual machines can share a data store with several other lower-priority virtual machines.
One of the underlying performance limits is that each ESXi server maintains a queue for each logical unit number (LUN), so all VMs sharing a single LUN share a single queue and, therefore, a cap on the total performance available to them. As long as the storage is answering requests -- removing them from the LUN queue faster than new input/output requests are issued -- the queue will not be a limit.
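The queue-sharing effect is easy to quantify. The sketch below assumes a LUN queue depth of 32, a common adapter default, though the actual value depends on the HBA and its configuration.

```python
# Per-VM share of a LUN queue: the more VMs share the LUN, the fewer
# outstanding I/Os each can have in flight at once. A queue depth of
# 32 is an assumed, adapter-dependent default.
def outstanding_ios_per_vm(lun_queue_depth=32, vms_on_lun=15):
    return lun_queue_depth / vms_on_lun

for n in (5, 15, 30):
    print(f"{n} VMs -> {outstanding_ios_per_vm(vms_on_lun=n):.1f} queue slots each")
```

With 30 VMs on one LUN, each averages barely one outstanding I/O, which is why busy VMs are given data stores (and thus LUN queues) of their own.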
The storage array must be configured with enough cache and enough disks of sufficient speed and the right RAID level for the load you will place on them. The storage team should be consulted early in this process since storage performance is their specialty.
Stay safe by distributing virtual machines
A further consideration is spreading your risk. Having all your Web servers in one data store means a single administrative error can destroy them all. Spread the virtual machines across more data stores to reduce this danger.
Distributing VMs across data stores also helps ease performance peaks. If all your mail servers get busy at the same time, having them all on one data store may cause an overload. Spreading them across data stores disseminates that peak load.
Another consideration is how long it would take to restore all those virtual machines if the data store were lost. The more virtual machines on the data store, the longer the recovery will take.
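A back-of-the-envelope estimate makes the point: recovery time grows linearly with the number of VMs on the lost data store. The restore throughput below is an assumed example figure, not a benchmark.

```python
# Restore-time estimate for a lost data store. The 200 GB/hour
# restore rate is an assumed figure; substitute your backup
# system's measured throughput.
def restore_hours(vm_count, vm_size_gb=40, restore_gb_per_hour=200):
    return vm_count * vm_size_gb / restore_gb_per_hour

print(f"{restore_hours(20):.0f} hours to restore 20 VMs")  # 20*40/200 = 4
```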
Disaster recovery presents another challenge
If you are implementing a storage replication-based DR strategy, where the storage array replicates from one data center to another, then this replication will drive the grouping of virtual machines on data stores.
Since whole data stores are replicated and fail over together, you will need to use separate data stores to isolate the groups of VMs needed to fail over or test failover separately. For example, you would have one set of data stores for the CRM system and another set for payroll. This requirement is in addition to the other conditions and often leads to having fewer VMs on each data store.
If your DR strategy involves replicating individual VMs, as vSphere Replication does, then this is not a concern.