Choosing an approach for shared disks in a VMware environment can be a challenge. There are several options, and which one is "right" depends on whom you ask. This article provides an overview of the ESXi shared storage options and makes some recommendations.
Shared or local storage?
An administrator can choose to use shared storage or local storage in a VMware environment. The question of which type to use becomes more important when multiple VMs must access the same storage devices simultaneously. If that kind of simultaneous access is required, shared storage is the only realistic direction.
The people involved in storage decisions
In a modern data center, at least three people are involved in making a VM do its job: the VMware administrator, the storage administrator and the administrator who handles the operating system that runs in the VMs. Ask each of them which kind of shared disk device you need and they'll all come up with a different answer.
Ultimately, the responsibility for storage lies with the storage area network (SAN) administrator. This person makes shared storage available and ensures it is presented to the ESXi hosts. The SAN administrator has two choices: present the storage to the ESXi hosts or present the storage to all of the individual machines that need to use it. The latter option isn't very efficient, because the SAN administrator would have to repeat this work each time a new VM is created -- which means the SAN administrator will be happy to leave it up to the VMware administrator.
After the SAN administrator makes the storage available to the ESXi hosts, the VMware administrator has a choice between two types of storage devices. He can take the presented storage and create a VMFS file system on it, in which VMDK files are created, or he can use raw devices that are accessed directly on the SAN. Raw devices bypass the VMFS file system layer -- and hence lack all its benefits -- whereas VMDK files live on top of it.
It may seem obvious that using a VMDK file on a VMFS file system is the better choice, but that isn't true in all cases. This approach adds more layers between the VM that needs access to the storage and the storage device -- and may decrease the device's flexibility. The most important issue is that VMFS file systems typically are related to an ESXi host, whereas a raw storage device is related to the SAN only. Therefore, using a raw storage device makes it easier to move a VM that uses shared storage to another ESXi host.
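As a rough illustration, both options come down to a single vmkfstools command on the ESXi host. The datastore path, VM folder and NAA device identifier below are hypothetical placeholders; a real LUN's identifier can be found under /vmfs/devices/disks/ on the host.

```shell
# Option 1: create a VMDK file on an existing VMFS datastore.
# eagerzeroedthick pre-allocates and zeroes the disk, which
# shared-disk clustering setups typically require.
vmkfstools -c 20G -d eagerzeroedthick \
  /vmfs/volumes/datastore1/sharedvm/shared.vmdk

# Option 2: create a raw device mapping (RDM) that passes the
# SAN LUN straight through to the VM. -z creates a physical-mode
# RDM; -r would create a virtual-mode RDM instead. Only a small
# mapping file lives on the VMFS datastore.
vmkfstools -z /vmfs/devices/disks/naa.600a0b8000123456 \
  /vmfs/volumes/datastore1/sharedvm/shared-rdm.vmdk
```

The physical-mode RDM keeps the VM's relationship with the LUN on the SAN side rather than tying it to a particular host's VMFS datastore, which is what makes moving the VM between ESXi hosts easier.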
Lastly, there is the administrator of the operating system in the VM -- let's call him the OS administrator. If this person needs just a basic operating system to host his application, he probably doesn't care which option is used. But if the application in the VM needs the best possible performance, the OS administrator will need to weigh in on what's happening on the shared storage devices as well.
A third storage option
Apart from the raw storage device that is presented to ESXi and the VMDK file, the OS administrator will probably consider another option: direct access to the storage device from the VM. Connecting a VM directly -- especially to an iSCSI SAN device -- makes the solution less complex, as only two layers are involved. If fewer layers are involved, optimization of performance gets easier. This is true especially for iSCSI devices that have many performance tuning options, but not so much for Fibre Channel devices that should be connected to a physical Fibre Channel interface on the ESXi host.
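On a Linux VM, direct in-guest access to an iSCSI SAN can be sketched with the standard open-iscsi initiator. The portal address and target IQN below are placeholders for this example.

```shell
# Discover the targets offered by the iSCSI portal
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260

# Log in to the discovered target; the LUN then shows up in the
# guest as an ordinary SCSI disk (e.g. /dev/sdb)
iscsiadm -m node -T iqn.2001-05.com.example:storage.lun1 \
  -p 192.168.1.50:3260 --login

# Make the session reconnect automatically after a reboot
iscsiadm -m node -T iqn.2001-05.com.example:storage.lun1 \
  -p 192.168.1.50:3260 --op update -n node.startup -v automatic
```

Because ESXi and VMFS are out of the picture, any tuning (multipathing, queue depths, network settings) happens entirely between the guest and the SAN.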
Keep business needs in mind
There is no right answer when it comes to providing shared storage devices to VMware VMs. It depends on whom you ask and on what the business needs. Because there are so many options, it does make sense to choose a preferred method as a starting point. Even if, from an OS administrator's perspective, it may be useful to access shared storage in different ways, a heterogeneous storage topology is hard to manage from a company perspective -- so standardizing on one approach usually pays off.