Storage DRS and increased scalability grabbed the headlines, but some lesser-known VMware vSphere 5 features may actually have a greater effect on your VMware infrastructure.
Each release of vSphere is like Christmas: We can't wait to open our presents and see what's inside. After toying around with vSphere 5, I've noticed new features and enhancements that haven't been discussed much but will change the way you design and manage a VMware infrastructure.
VMFS 5 improvements

With VMFS 5, you can now create 2 TB virtual disks, and new VMFS volumes use a single 1 MB block size. For years, VMware administrators had to juggle multiple block sizes, which capped virtual disk sizes. VMFS 5 solves a lot of those problems.
Upgrading an existing VMFS 3 volume to VMFS 5 is easy and non-destructive. (Previously, if you upgraded from an earlier version of VMFS, it would destroy all the data and VMs on the volume.) Upgrading to VMFS 5 will also retain the previously configured block size.
Note, however, that upgraded volumes keep their larger VMFS 3 block sizes, and certain vStorage APIs for Array Integration features require data stores to have the same block size. One such feature is copy offload, which offloads certain storage-related functions from the hypervisor to the array. So if your VMFS 3 volumes don't use the 1 MB block size, it's probably best to create a new VMFS 5 volume instead of upgrading.
VMFS 5 is also compatible with logical unit numbers (LUNs) up to 64 TB without the need for extents, which connect multiple LUNs together.
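To put the old limits in perspective, here's a quick, illustrative Python sketch of the well-known VMFS 3 block-size ceilings. The constant below is derived from the published limits (a 1 MB block size allowed roughly 256 GB files, scaling linearly up to 2 TB at 8 MB), not from any VMware code:

```python
# VMFS 3 could address roughly 256K blocks per file, so the maximum
# virtual disk size scaled linearly with the volume's block size.
BLOCKS_PER_FILE = 256 * 1024  # approximate addressable blocks per file

def max_vmdk_gb(block_size_mb: int) -> int:
    """Approximate maximum VMDK size (in GB) for a given VMFS 3 block size."""
    return block_size_mb * BLOCKS_PER_FILE // 1024  # MB -> GB

for bs in (1, 2, 4, 8):
    print(f"{bs} MB block size -> max VMDK ~{max_vmdk_gb(bs)} GB")
# 1 MB -> ~256 GB, 2 MB -> ~512 GB, 4 MB -> ~1024 GB, 8 MB -> ~2048 GB
```

With VMFS 5's unified 1 MB block size, that per-block-size ceiling no longer dictates how large a virtual disk you can create.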
SplitRx Mode

With the vSphere 5 release, VMware didn't give much love to networking. But SplitRx Mode is one of the vSphere 5 features that I find interesting. It's a new method for network packet receive processing: handling the packets sent to a network interface card (NIC) from other network devices.
Previously, VMs processed network packets in a single, shared context, which could become constrained. It’s now possible to split the receive packet processing into multiple, separate contexts. (Imagine that packets had to wait in a single line, but now there’s a special VIP line with direct access to the VM.)
With SplitRx Mode, you can specify which virtual NICs (vNICs) process network packets in a separate context, instead of on the traditional, shared network queue. But you can enable SplitRx Mode only on vNICs that use the VMXNET3 adapter.
This vSphere 5 feature also adds host CPU overhead, so be careful how you deploy it. VMware recommends SplitRx Mode for multicast workloads, where many VMs on the same host receive the same network traffic simultaneously.
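As a sketch of how this looks in practice, SplitRx Mode is toggled per vNIC through an advanced setting in the VM's .vmx configuration file. The adapter index (ethernet0) below is just an example; substitute the number of the VMXNET3 adapter you want to change:

```
# Example .vmx excerpt: enable SplitRx Mode on the VM's first vNIC.
# The adapter must use the VMXNET3 device for this setting to apply.
ethernet0.virtualDev = "vmxnet3"
ethernet0.emuRXMode = "1"
```

Setting the value back to "0" returns that vNIC to the shared receive-processing context.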
Network I/O Control
VMware also enhanced Network I/O Control in vSphere 5 so you can prioritize VM traffic. Introduced in vSphere 4, Network I/O Control allows you to create resource pools and set priorities for host-level network traffic types, such as Network File System, iSCSI, management and vMotion traffic. But VM traffic was lumped together in a single pool, so you could not prioritize individual VM traffic to ensure that critical workloads received enough network bandwidth.
In vSphere 5, however, that issue is resolved, with new resource pools that are based on 802.1p networking tags. Now you can create multiple VM resource pools that allocate network bandwidth differently to multiple VMs running on a host. This feature is great for multitenant environments or hosts that share a mix of noncritical and critical VMs. It will ensure that important VMs obtain the networking resources they need.
Storage vMotion enhancements
vSphere 5 features a redesigned mechanism for Storage vMotion, making it more efficient. It no longer uses Changed Block Tracking to record disk changes during the Storage vMotion process. Instead, Storage vMotion now performs mirrored writes: any write during a migration is written to both the source and destination disks at the same time. To ensure that both disks stay in sync, the source and destination disks both acknowledge each write.
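As a conceptual sketch (not VMware's actual implementation), the mirrored-write idea looks roughly like this: every guest write issued during the migration is applied to both disks, and the I/O completes only after both targets have accepted it:

```python
import io

class MirroredWriter:
    """Toy illustration of mirrored writes during a Storage vMotion:
    each write goes to both the source and destination disk, and the
    call returns only after both targets have acknowledged it."""

    def __init__(self, source, destination):
        self.source = source
        self.destination = destination

    def write(self, offset: int, data: bytes) -> None:
        for disk in (self.source, self.destination):
            disk.seek(offset)
            disk.write(data)  # each target must accept the write...
            disk.flush()      # ...before the guest I/O completes

# Two in-memory "disks" stand in for the source and destination VMDKs.
src, dst = io.BytesIO(b"\x00" * 16), io.BytesIO(b"\x00" * 16)
mirror = MirroredWriter(src, dst)
mirror.write(4, b"DATA")
assert src.getvalue() == dst.getvalue()  # both disks stay in sync
```

The key property this models is that there is no change log to replay afterward: once the bulk copy finishes, the destination is already current.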
VMware also made another big enhancement to Storage vMotion: You can now live-migrate a VM that has active snapshots, which wasn’t possible in vSphere 4. It’s a big deal, because Storage vMotion operations will be common in vSphere 5. And the new Storage Distributed Resource Scheduler (DRS) feature will move VMs between data stores on a regular basis to redistribute storage I/O loads.
vMotion improvements

vMotion is a core technology that many vSphere 5 features rely upon, and VMware made a few performance and usability enhancements to it.
Perhaps the biggest upgrade is that vMotion can now use multiple physical NICs (pNICs) to perform a migration, instead of just one. The VMkernel automatically load-balances vMotion traffic across all pNICs assigned to vMotion-enabled VMkernel port groups. vMotion can use up to sixteen 1 Gbps pNICs or four 10 Gbps pNICs and saturate all of the connections, which greatly increases the speed of migrations.
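As a toy illustration (VMware's real load balancing is more sophisticated and happens at the network layer), spreading migration traffic across several pNICs can be pictured as a round-robin assignment of memory pages to links; the vmnic names below are hypothetical:

```python
from itertools import cycle

def distribute_pages(pages, nics):
    """Toy round-robin sketch of spreading vMotion traffic across
    multiple pNICs; not VMware's actual algorithm."""
    assignment = {nic: [] for nic in nics}
    for nic, page in zip(cycle(nics), pages):
        assignment[nic].append(page)
    return assignment

nics = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]  # hypothetical uplinks
pages = list(range(100))                          # memory pages to copy
load = distribute_pages(pages, nics)
# Each of the four links carries 25 pages, so the copy finishes
# roughly four times faster than over a single link.
```

The point of the sketch is simply that aggregate bandwidth scales with the number of uplinks, which is why multi-NIC vMotion shortens migrations so dramatically.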
vMotion will also scale better with the introduction of Metro vMotion, which increases the acceptable round-trip latency between the VMkernel interfaces on each host to 10 milliseconds. Previously, the maximum supported latency was 5 milliseconds, which limited vMotion to fast local area networks.
Metro vMotion still requires a fairly fast, low-latency network connection between hosts. But it opens the door for using vMotion over extended distances, such as metropolitan area networks, which are typically hosted within a geographical region.
Because the distances between sites in metro networks are usually less than 100 miles, the latency is sufficient to support vMotion. But networks that span longer distances typically have higher latency and still aren’t suitable for vMotion.