Selecting VMware storage involves a few important choices: Network File System (NFS) file-based storage, or block-based storage over iSCSI or Fibre Channel? Virtual Machine File System (VMFS) data stores with thick- or thin-provisioned virtual disks? And what about VMware Storage vMotion?
Once you choose your VMware storage design, learn how to connect ESXi hosts to a SAN, attach NFS storage to hosts and avoid some common thin provisioning mistakes. This guide covers basic VMware storage options, advanced virtualization storage features and third-party tools for VMware storage management.
Many VMware storage admins start with VMFS for block-based VMware storage. But you can also use NFS file-based storage, or, for more flexible use of disk space, try thin provisioning. Whichever VMware storage options you choose, you’ll also need to learn how to connect your file system or SAN to ESX/ESXi hosts.
The inner workings of the vStorage Virtual Machine File System
The Virtual Machine File System, the foundation of VMware storage, is a clustered file system used with block-level storage such as iSCSI or Fibre Channel. VMFS runs on each host and manages the file system namespace, arbitrating access to files so that multiple hosts can safely share the same volume. VMware storage management with VMFS is easy, because several ESX/ESXi hosts can access a single logical unit number (LUN) concurrently. It’s also important to determine the right size for your VMFS data stores.
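To see how an existing VMFS data store is laid out, you can query it from the ESX service console or ESXi Tech Support Mode shell. A minimal sketch, assuming shell access and a data store named datastore1 (a placeholder):

```shell
# Query a VMFS data store's capacity, free space, block size and
# extent layout in human-readable units ("datastore1" is a placeholder)
vmkfstools -P -h /vmfs/volumes/datastore1
```

The block size reported here matters when sizing data stores, because on VMFS-3 it caps the maximum size of a single virtual disk file.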
Implementing iSCSI storage in a VMware environment
Many VMware admins now choose iSCSI for VMware storage because of its price and performance. Usually cheaper than Fibre Channel, iSCSI runs over standard Gigabit Ethernet, and moving to 10 Gbps Ethernet closes much of the performance gap. Plus, there are a few best practices that will keep your iSCSI storage running at its best. It’s best to isolate iSCSI traffic onto its own dedicated network, for instance, and make sure the network interface cards used in your iSCSI virtual switch connect to separate network switches. This approach ensures that you don’t have a single point of failure.
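On vSphere 4.x, part of that isolation is binding multiple VMkernel ports to the software iSCSI adapter so each physical path fails over independently. A hedged sketch; vmk1, vmk2 and vmhba33 are placeholders for your own VMkernel ports and adapter name:

```shell
# Bind two VMkernel ports (each backed by a different physical NIC
# uplinked to a different switch) to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify the port bindings took effect
esxcli swiscsi nic list -d vmhba33
```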
Thin provisioning myth-busters: The benefits of thin virtual disks
Thin virtual disks can grow in size as data requirements increase. Some people believe thin virtual disks offer weaker VMware storage performance, but that’s not necessarily the case. The flexibility of thin provisioning also keeps you from wasting precious disk space. Just be careful not to create thin virtual disks so large that they could grow to consume an entire LUN, especially for virtual machines (VMs) that regularly create or delete large files.
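Creating a thin disk from the command line makes the mechanics clear: the full capacity is defined up front, but blocks are allocated only as the guest writes. A sketch, assuming a data store and VM folder with these placeholder names:

```shell
# Create a 40 GB thin-provisioned virtual disk: the guest sees 40 GB,
# but VMFS allocates blocks only as data is actually written
vmkfstools -c 40G -d thin /vmfs/volumes/datastore1/myvm/myvm.vmdk
```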
Connecting an ESXi virtualization host to an iSCSI SAN
By default, virtual machines are saved to an ESXi host’s local VMware storage, but that presents a problem if the host itself fails. Instead, you can separate the host from the disk files by connecting an ESXi host to an iSCSI SAN. Once you connect the host through its Storage Adapters tab, you can also add new VMware storage and configure access to either a SAN or an NFS share.
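The same connection can be made from the command line on ESX/ESXi 4.x. A hedged sketch; the discovery address 192.168.100.10 and adapter name vmhba33 are placeholders for your environment:

```shell
# Enable the software iSCSI initiator
esxcfg-swiscsi -e

# Point it at the SAN's send-targets discovery address (placeholder IP)
vmkiscsi-tool -D -a 192.168.100.10 vmhba33

# Rescan the adapter so newly presented LUNs appear
esxcfg-rescan vmhba33
```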
Attaching NFS storage to a VMware ESXi host
Also among the VMware storage options is NFS storage, which can centrally store VM files for ESXi infrastructures. It’s usually cheaper than proprietary SANs, and connecting an ESXi host to NFS storage is fairly easy through the host’s Storage link in the vSphere Client. In the Add Storage wizard, you’ll have to enter properties to connect to the NFS share. And if you want the NFS server to store virtual disk image files, don’t mount the NFS storage as read-only.
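The same NFS mount can be made without the vSphere Client. A minimal sketch from the ESX/ESXi command line; the server name, export path and data store label are placeholders:

```shell
# Mount an NFS export as a data store, read/write so it can hold
# virtual disk files (omit any read-only option for this use case)
esxcfg-nas -a -o nfs01.example.com -s /export/vmstore nfs_datastore1

# List the configured NFS mounts to confirm
esxcfg-nas -l
```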
Resignaturing VMFS volumes: The forgotten VMware SRM subject
Resignaturing VMFS volumes is important for your disaster recovery efforts. If you use VMware Site Recovery Manager and block-level VMware storage, you probably recover data through VMFS snapshots. But because replicated volumes carry the same signature as the originals, a host treats them as snapshots and won’t mount them until they’re resignatured. To avoid errors during disaster recovery, it’s a simple matter of modifying the advanced settings in the vSphere Client’s Site Recovery box.
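On vSphere 4.x you can also inspect and resignature snapshot volumes by hand, which is useful for testing replication outside SRM. A sketch; the volume label is a placeholder taken from the listing output:

```shell
# List VMFS volumes the host has detected as snapshots/replicas
esxcfg-volume -l

# Resignature one of them (by label or UUID from the output above)
# so the host can mount it alongside the original
esxcfg-volume -r replicated_ds
```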
Aside from the variety of VMware storage protocols, there are additional features to aid storage management -- many of which distinguish VMware storage from its competitors. VMware Storage vMotion, for instance, allows for the live migration of disk files between different data stores. But VMware storage poses potential problems. Many admins encounter issues with scalability, performance and agility.
Using VMware Storage vMotion when shared storage is down
With VMware Storage vMotion, you can move a VM’s files from one data store to another while the VM is running (as long as your hosts are configured and licensed for VMware vMotion). The destination can be any storage volume configured on an ESX/ESXi host, but the host must have access to both the source and target data store. VMware Storage vMotion is especially useful if you have to shut down SANs for any reason. VMs on shared storage can remain available by temporarily moving them to local storage with VMware Storage vMotion.
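Beyond the vSphere Client, the vSphere CLI can drive the same migration, which is handy for scripting an evacuation before SAN maintenance. A hedged sketch; the vCenter host, datacenter, VM path and data store names are all placeholders:

```shell
# Relocate a running VM's files from data store ds1 to ds2 via
# Storage vMotion (the command prompts for the password)
svmotion --server=vcenter.example.com --username=admin \
  --datacenter='DC1' --vm='[ds1] myvm/myvm.vmx:ds2'
```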
VMware virtual servers offer advanced shared storage options
A few elements of VMware storage set it apart from competitors’ storage options. For one, VMware allows you to distinguish between VMs and set priorities based on latency or I/O operations per second, meaning you can reprioritize the I/O to a specific VM. Virtualizing I/O for block-level storage is also distinct to VMware storage. And if you use shared storage, you glean the benefits of VMware tools such as Site Recovery Manager, Distributed Resource Scheduler and Storage vMotion.
VMware storage issues may be solved by your array
Some of the most common VMware storage issues stem from server consolidation, which leads to contention for storage and I/O resources. Consolidation also makes it tedious to do administrative tasks such as moving disk files between storage arrays. But now, the vStorage APIs for Array Integration (VAAI) offload VM operations to the array level, improving scalability and manageability. Writing zeroed-out disk blocks, for instance, can be offloaded to the array, reducing the I/O the server generates and the data it transmits.
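On an ESX/ESXi 4.1 host you can check whether the VAAI primitives are enabled through the advanced configuration options. A sketch, assuming shell access:

```shell
# Check VAAI hardware-acceleration settings (1 = enabled)
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove  # full copy offload
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit  # block zeroing offload
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking   # ATS locking
```

Note that the array’s firmware must also support these primitives; the host-side setting alone doesn’t guarantee offload.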
Along with a variety of VMware storage options, a few popular third-party tools provide VMware storage management. EMC Storage Viewer and NetApp Rapid Cloning Utility can enhance VMware storage by reporting on storage properties and centralizing management. NetApp data deduplication can also help VMware infrastructures reduce the capacity needs of virtual machine disk (VMDK) files, making VMware storage management much simpler.
EMC Storage Viewer: Integrating storage array views
EMC Storage Viewer 2.1, now compatible with vSphere 4.1, is a vStorage plug-in for vCenter. If you run a Symmetrix or Clariion array, Storage Viewer provides an integrated view of VMware storage use and configuration. This VMware storage management tool makes it easy to map data stores to the storage array. You can see LUN properties, snapshots, array paths, and the association between host bus adapter ports and storage processors on the Clariion array -- all in one place.
Cloning VM storage volumes with NetApp Rapid Cloning Utility 3.0
Not only can NetApp Rapid Cloning Utility (RCU) 3.0 copy VMs, but more important for VMware storage, it also allows you to quickly create and mount new data stores without needing access to storage management tools. If you’re a NetApp customer, the utility is free. Once you install NetApp Rapid Cloning Utility, you can create new storage volumes and provision data stores in just a few steps. Then, you can use RCU to manage deduplication settings, resize or even destroy those VMware storage volumes. And if you use RCU for cloning virtual machine storage volumes, you can even choose whether to create a new data store to hold the new VMs.
NetApp dedupe users reduce primary storage needs for VMDK files
Using NetApp data deduplication can save storage space by drastically reducing the capacity needs of VMware VMDK files. Data deduplication saves one copy of the server OS and points to that copy, rather than storing an additional one for each VM on the physical server. If you use an NFS mount, it also helps to group VMs running the same OS onto one mount. Finally, running deduplication during off-peak hours will help ensure that you don’t see any performance changes.
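On the NetApp controller itself, deduplication is enabled per volume. A hedged sketch using Data ONTAP 7-mode commands; the volume name is a placeholder for the volume backing your VMware data store:

```shell
# Enable deduplication on the volume backing the VMware data store
sis on /vol/vol_vmware

# Schedule dedupe runs for off-peak hours (here, 2 a.m. every day)
sis config -s sun-sat@2 /vol/vol_vmware

# Kick off an initial scan that dedupes data already on the volume
sis start -s /vol/vol_vmware
```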
This was first published in May 2011