Managing storage for virtual environments: A complete guide
VMware vSphere 5 takes storage management to a new level. Not only does the new version include Storage DRS and storage intelligence in VMware HA, but it also improves the file system and storage APIs and adds a new storage appliance.
I’ll walk you through some of the major VMware storage management improvements in vSphere 5.
vSphere Storage Appliance
vSphere 5 ushers in a new storage appliance, which is available as a separate purchase. Virtual storage appliances are virtual machines (VMs) that run on your hosts and add capabilities to your existing storage. For instance, it's possible to populate a physical server with cheap direct-attached storage and then have the new vSphere Storage Appliance (VSA) present this storage to the outside world.
You might expect performance to be subpar compared with other methods of presenting storage on the network, but these appliances work surprisingly well. Storage virtualization will likely be commonplace within a few years: many storage vendors are creating a virtualization layer that separates the inbound connection to the storage from the disks themselves, allowing customers to move these virtual instances from one array to another.
In vSphere 5, multiple VSAs can be clustered together natively, offering a single NFS export point and eliminating the VSA as a single point of failure. The VSA requires ESXi 5. Before it can be enabled on a cluster, the installer confirms which hosts are valid for its use. The installer then aggregates the local storage of the host the appliance runs on, and the VSA Manager and the VSA Cluster Service configure the appliance and monitor its availability and the NFS export.
As with vSphere Replication, the VSA includes a wizard that imports the appliance and a plug-in to manage the service directly from vCenter. The manager deploys the appliance and automates mounting the NFS export to each host, saving the administrator the hassle of running the Add Storage wizard on every host. Using the VSA alongside vSphere Replication with Site Recovery Manager, you could potentially store your VMs while also offering a disaster recovery service -- all without the expense of an enterprise storage array.
Improvements to VMFS
Not content with creating a new storage and replication appliance, VMware has also been busy upgrading existing technologies. vSphere 5 introduces a new version of the Virtual Machine File System (VMFS) that does not require you to power down VMs or use Storage vMotion to upgrade from the older VMFS 3.
VMFS 5 allows for much larger partitions than its predecessor. Earlier versions relied on the MBR partitioning scheme (with its legacy cylinders-heads-sectors addressing), whose 32-bit sector counts limited a single VMFS volume to 2 TB. The only way to create a file system beyond that limit was to concatenate volumes with a feature called VMFS Extents, prompting many customers to use NFS instead. VMFS 5 shifts to GUID Partition Tables (GPT), which raises the maximum size of a single VMFS partition to 64 TB.
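The jump from 2 TB to 64 TB follows directly from the addressing scheme. A quick back-of-the-envelope check (plain Python, assuming the standard 512-byte sector size):

```python
# MBR partition tables store sector counts in 32-bit fields,
# so with 512-byte sectors the largest addressable partition is:
SECTOR_SIZE = 512                       # bytes, the standard sector size
mbr_max_sectors = 2**32                 # 32-bit LBA field in the MBR
mbr_limit = mbr_max_sectors * SECTOR_SIZE
print(mbr_limit / 2**40)                # -> 2.0 (TiB), the old VMFS ceiling

# GPT uses 64-bit LBAs, so the partition table is no longer the
# bottleneck; VMFS 5's 64 TB maximum is a file-system limit instead.
gpt_max_sectors = 2**64
print(gpt_max_sectors * SECTOR_SIZE / 2**40)  # vastly larger than 64 TiB
```

In other words, GPT removes the partition-table limit entirely; the 64 TB cap is imposed by VMFS 5 itself.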
That said, VMFS 5 doesn't yet raise the maximum size of a virtual disk, though it does allow pass-through Raw Device Mappings (RDMs) of up to 64 TB. An RDM gives a VM direct access to a logical unit number (LUN) on a storage array.
VMware-Aware Storage APIs
VMware-Aware Storage APIs (VASA) in vSphere 5 supersede the previous integration introduced in vSphere 4.1.
This VMware storage management feature is implemented differently from the vStorage APIs for Array Integration (VAAI). VAAI uses T10-standard "primitives" to drive storage improvements in vSphere, while VASA requires a storage vendor plug-in. This model allows VMware and the storage vendors to cooperate, and it lets vendors present features unique to their arrays to the ESXi host. vSphere 5 also finally resolves an issue from the previous release, where some storage vendors supported a different subset of primitives than their competitors.
Now, all the vendors support the fourth VAAI primitive, "thin provisioning stun." Thin provisioning is increasingly popular for VMware storage management of data stores and virtual disks, but you have to be careful with storage overcommitment, because you risk writing more blocks of data than you have actual disk space. Previously, if you ran out of physical disk space, the VMs simply crashed. With thin provisioning stun, you get dialog boxes warning of an imminent problem, and affected VMs are "stunned" (paused) rather than crashing outright.
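The danger is easy to quantify. Here's a minimal sketch (plain Python with made-up numbers, not a VMware API) of how overcommitment creeps up on a thin-provisioned datastore:

```python
# Hypothetical thin-provisioned datastore: 2 TB of physical capacity
# backing VM disks that are provisioned well beyond it.
physical_capacity_gb = 2048

# Per-VM (provisioned GB, GB actually written) -- illustrative values only.
vms = {
    "web01":  (500, 120),
    "db01":   (1000, 760),
    "db02":   (1000, 500),
    "file01": (750, 500),
}

provisioned = sum(p for p, _ in vms.values())
used = sum(u for _, u in vms.values())

print(f"overcommit ratio: {provisioned / physical_capacity_gb:.2f}x")  # -> 1.59x
print(f"physically used:  {used} / {physical_capacity_gb} GB")

# This is the condition thin provisioning stun guards against: writes keep
# landing until used blocks exhaust physical capacity, at which point
# affected VMs are paused instead of crashing.
if used > 0.9 * physical_capacity_gb:
    print("warning: datastore nearly full -- VMs risk being stunned")
```

Here the VMs have been promised roughly 1.6 times the disk that physically exists, and actual writes are already past the 90% mark -- exactly the scenario where the stun primitive earns its keep.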
Another thin provisioning issue that vSphere 5 addresses is space reclamation. When a file is deleted in Windows or Linux, it isn't actually physically removed from the disk. Instead, the file is marked for deletion, and it is eventually overwritten by newly created files. In most cases this isn't a problem, but with thin virtual disks on thin data stores, it can result in thin volumes growing uncontrollably. That's because when files are deleted, the free space is not gracefully handed back to the storage array.
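A toy model (plain Python, not VMware code) makes the growth pattern concrete: the guest's view of used space can shrink, but without a reclamation mechanism the thin volume's allocation only ratchets upward:

```python
class ThinVolume:
    """Toy model of a thin volume that allocates backing blocks on first
    write but -- like pre-reclamation thin provisioning -- never returns them."""

    def __init__(self):
        self.allocated = set()   # blocks the array has backed with real disk
        self.live = set()        # blocks the guest file system considers in use

    def write(self, blocks):
        self.allocated |= set(blocks)   # array allocates on first write
        self.live |= set(blocks)

    def delete(self, blocks):
        # The guest merely marks blocks free; the array never hears about it.
        self.live -= set(blocks)


vol = ThinVolume()
vol.write(range(0, 100))      # create some files
vol.delete(range(0, 80))      # delete most of them
vol.write(range(100, 180))    # worst case: new files land on fresh blocks

print(len(vol.live))          # -> 100: what the guest sees in use
print(len(vol.allocated))     # -> 180: what the array has backed -- growth only
```

The guest thinks it is using 100 blocks, but the array is backing 180; repeat the write-delete cycle and the gap keeps widening, which is precisely why handing freed space back to the array matters.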
As you might imagine, this situation does not play well with audit trails and chargeback. Imagine being a customer in a private or public cloud, paying for all the files you have as well as all the files you have deleted.
VMware has really improved storage management in vSphere 5, but don't be deceived: there are numerous changes throughout vSphere that make it a showcase of innovation and engineering. Storage management is just the beginning.