The new features and licensing model of VMware vSphere 5 will significantly affect the way IT pros design and manage their data centers.
New features, such as Storage Distributed Resource Scheduler and Profile-Driven Storage, bring real improvements, but the new licensing model and the retirement of ESX mean an upgrade deserves careful planning.
If you're interested in vSphere 5, consider these five fundamental changes to make sure you won't rush into a migration unprepared.
1. Memory-based licenses
New licensing guidelines top the list of changes to VMware's flagship product. VSphere 5 licenses come with restrictions on the number of CPU sockets and the amount of memory that you can allocate to virtual machines (VMs), although VMware has lifted the limitation on the number of CPU cores that can be used. Even so, for many IT shops, this major change will affect how they provision virtual machines as well as their licensing costs.
A Standard license allows users to assign 16 GB of memory and one CPU socket to powered-on VMs, regardless of how much physical memory a host has. An Enterprise license permits 32 GB of memory and one socket, and Enterprise Plus allows for 48 GB of memory and one CPU socket. So if a host has Enterprise Plus licenses for two physical processors, you have 96 GB of memory at your disposal to divide among the VMs.
To increase the amount of memory that you can assign to VMs, you have to either upgrade to a higher licensing tier or buy additional processor licenses for the host. You cannot purchase additional memory packs for a processor license to increase the allotment, however.
Under this new model, all available memory is pooled in vCenter Server. If you purchased Enterprise Plus licenses for four hosts with two sockets each, for example, your allotment is 384 GB (4 x 2 x 48) that can be shared among the virtual machines on those hosts.
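As a rough illustration of the pooling arithmetic above, here is a small sketch. The helper function and tier table are hypothetical, not a VMware tool; the entitlement figures are the per-socket amounts quoted earlier in this article.

```python
# Hypothetical sketch of pooled vRAM entitlement under the vSphere 5
# licensing model described above. Not a VMware tool.

# Per-socket memory entitlements (GB) by license tier, per the article.
ENTITLEMENT_GB = {"Standard": 16, "Enterprise": 32, "Enterprise Plus": 48}

def pooled_entitlement_gb(hosts):
    """hosts: list of (tier, socket_count) tuples managed by one vCenter."""
    return sum(ENTITLEMENT_GB[tier] * sockets for tier, sockets in hosts)

# Four Enterprise Plus hosts with two sockets each: 4 x 2 x 48 = 384 GB.
pool = pooled_entitlement_gb([("Enterprise Plus", 2)] * 4)
print(pool)  # 384
```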
These licensing changes may deter many shops from upgrading to vSphere 5. If you have hosts with a lot of memory, it could get very costly to stay in compliance with the new licensing model. And for users who do migrate to vSphere 5, these changes will affect how hosts are architected and managed.
It will be very expensive to scale up hosts by adding large amounts of memory. Instead, it's more affordable to scale out with more hosts that have less memory. Also, preventing VM sprawl and right-sizing the resource parameters of VMs are even more important, now that a greater premium is placed on memory.
2. Better storage resource management
In vSphere 5, storage resource management greatly improved with the introduction of Storage Distributed Resource Scheduler (DRS) and Profile-Driven Storage.
Storage DRS automatically load balances storage disks and selects the best placement for VMs based on the available disk space and current I/O load. These capabilities remedy problems with DRS and Storage I/O Control in vSphere 4. DRS only considers CPU and memory usage when load balancing, and Storage I/O Control can prioritize and limit I/O on data stores, but it doesn't allow you to redistribute I/O.
Storage DRS can also use Storage vMotion to load balance data stores, based on storage space utilization, I/O metrics and latency. You can create data store clusters, which pool storage resources. Storage DRS manages these resources similar to how DRS manages compute resources in a cluster.
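To make the placement idea concrete, here is a toy sketch of the kind of initial-placement decision Storage DRS automates: filter out datastores without enough free space, then prefer the one with the lowest current latency. The function, field names and values are illustrative only, not VMware's actual algorithm or API.

```python
# Toy sketch of a Storage DRS-style initial placement decision:
# require enough free space, then prefer the lowest-latency datastore.
# Names, fields and thresholds are illustrative, not the vSphere API.

def pick_datastore(datastores, vm_disk_gb):
    """datastores: list of dicts with 'name', 'free_gb', 'latency_ms'."""
    candidates = [d for d in datastores if d["free_gb"] >= vm_disk_gb]
    if not candidates:
        raise ValueError("no datastore has enough free space")
    return min(candidates, key=lambda d: d["latency_ms"])["name"]

stores = [
    {"name": "ds-sas-01", "free_gb": 500, "latency_ms": 12},
    {"name": "ds-sata-01", "free_gb": 2000, "latency_ms": 35},
]
print(pick_datastore(stores, vm_disk_gb=100))  # ds-sas-01
```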
Another highly touted feature is Storage Profiles, which allows you to define classes of storage so VMs are provisioned and migrated to the proper storage type. Many infrastructures have multiple storage data stores with different performance characteristics. Storage Profiles ensures that a VM stays on a class of storage that meets the VM's performance requirements. After all, you wouldn't want critical applications running on slower storage tiers.
For example, a profile may specify that a VM must be on a storage class with latency of less than 50 milliseconds or throughput of at least 100 megabytes per second. The vStorage APIs for Storage Awareness allow vSphere to read the performance characteristics of a storage device to determine the data store's class. If a VM is provisioned on a class of storage that doesn't meet the profile's requirements, the VM becomes non-compliant and it's noted in the vSphere Client. The administrator can then take steps to move the workload to a more suitable storage device.
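The compliance check from this example can be sketched as follows. This is an illustrative approximation in the spirit of Storage Profiles; the function and field names are hypothetical, not the vSphere API.

```python
# Illustrative Storage Profiles-style compliance check: per the example
# above, a datastore satisfies the profile if its latency is under the
# ceiling OR its throughput meets the floor. Names are hypothetical.

def is_compliant(profile, datastore):
    return (datastore["latency_ms"] < profile["max_latency_ms"]
            or datastore["throughput_mbps"] >= profile["min_throughput_mbps"])

gold = {"max_latency_ms": 50, "min_throughput_mbps": 100}
fast = {"latency_ms": 20, "throughput_mbps": 150}
slow = {"latency_ms": 80, "throughput_mbps": 60}
print(is_compliant(gold, fast), is_compliant(gold, slow))  # True False
```

A non-compliant result corresponds to the flag an administrator would see in the vSphere Client before migrating the VM to better storage.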
3. Revamped vCenter Server and Web client
You can deploy vCenter Server as a Linux virtual appliance, which should make deployments easier. The appliance maintains all the regular vCenter Server features except Linked Mode, and you can still manage it through the vSphere Client.
In this form, vCenter Server no longer requires Windows Server, and it comes packaged with the DB2 Express database. For external databases, it supports only Oracle and DB2. This change may be very appealing to Linux shops, which won't have to use Microsoft products for vCenter Server.
You can configure vCenter Server through a Web interface, allowing you to set it up on any workstation, regardless of the operating system. It also supports the Adobe Flex Web-based administration interface, giving it greater functionality in the browser. As such, it should lessen an administrator's dependence on the vSphere Client.
VMware has also updated the Web client, which can now carry out administration tasks. The old Web interface was very simple, offering little beyond basic VM functions, such as powering a VM on and off and connecting to the remote console.
Written in Adobe Flex, the new client has a rich graphical user interface and much more functionality. It still manages only VMs, however, and it's not meant as a replacement for the vSphere Client. But VMware will continue to add functionality to the new framework, and at some point it may succeed the C#, Windows-only vSphere Client.
4. Fault Domain Manager
VMware High Availability (HA) has been completely overhauled and enhanced, but it's much more complicated.
Previously, VMware HA relied on primary nodes (up to five) to maintain the cluster settings and node states. The other hosts were secondary nodes and sent their states to the primary nodes. Communication between the primary and secondary nodes involved heartbeats, which could detect outages.
In the new HA architecture, each host runs a special Fault Domain Manager agent that's independent of the vpxd agent, which is used to communicate with vCenter Server. It also uses a master/slave concept, with one host elected as a master and the other hosts as the slaves. The election uses an algorithm to determine the master and it occurs at several stages: when HA is enabled, when a master fails or is shut down, or when a problem occurs with the management network.
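The election logic can be sketched roughly as follows. As commonly described for vSphere 5 HA, the host that can access the most datastores wins, with ties broken by the lexically highest host identifier; this sketch is a simplified illustration, not VMware's actual implementation.

```python
# Simplified sketch of the FDM master election described above: the host
# that can see the most datastores wins, and ties are broken by the
# lexically highest host identifier. Illustrative only, not VMware's code.

def elect_master(hosts):
    """hosts: list of (host_id, datastore_count) tuples."""
    return max(hosts, key=lambda h: (h[1], h[0]))[0]

cluster = [("host-22", 6), ("host-9", 6), ("host-14", 4)]
# host-22 and host-9 tie on datastore count; "host-9" sorts lexically
# higher than "host-22" ('9' > '2'), so host-9 becomes the master.
print(elect_master(cluster))  # host-9
```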
The master monitors all host and VM availability as well as the power state of the protected VMs. The master also manages the list of hosts and protected VMs in the cluster. In the event of a failure, the new architecture can restart VMs faster than previous versions of HA.
Perhaps one of the best changes to HA is that it no longer relies only on the management network to monitor the heartbeats. HA can now use a storage subsystem for communication, in a method known as Heartbeat Datastores.
Heartbeat Datastores are used as a communication channel only when the management network is lost (e.g., through isolation or network partitioning). VCenter Server automatically chooses two data stores to use for monitoring, but you can also manually select them. Heartbeat Datastores support both Virtual Machine File System (VMFS) and Network File System (NFS) data stores.
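The diagnosis that Heartbeat Datastores enable can be sketched like this: when the master stops receiving network heartbeats from a host, it checks the heartbeat datastores before declaring the host dead. The function and state names below are illustrative, not VMware's terminology or implementation.

```python
# Sketch of the failure diagnosis enabled by Heartbeat Datastores.
# State names and the function itself are illustrative only.

def diagnose(network_heartbeat, datastore_heartbeat):
    if network_heartbeat:
        return "healthy"
    if datastore_heartbeat:
        # Host is alive but unreachable over the management network:
        # isolated or partitioned, so its VMs keep running where they are.
        return "isolated-or-partitioned"
    # No heartbeat on any channel: treat the host as failed and
    # restart its protected VMs on other hosts.
    return "failed"

print(diagnose(True, True), diagnose(False, True), diagnose(False, False))
```

Without the datastore channel, the first two cases would be indistinguishable from a dead host, which is why earlier HA versions could needlessly restart VMs during a management-network outage.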
5. Goodbye ESX and the Service Console
VMware has talked about retiring ESX for years, and the day is finally here. ESXi is the only hypervisor included in vSphere 5.
If you're used to the ESX service console, managing ESXi will be a big adjustment. There are two major differences between ESX and ESXi: installation and command-line management. Manually installing ESXi is actually much easier, and its wizard is simple compared with ESX's. For automated deployments, the new Auto Deploy feature can boot hosts over the network with the Preboot Execution Environment (PXE) and load ESXi images onto them.
As for command-line management, you no longer have a full service console. Most management is done remotely with the vSphere CLI (now called the vCLI) and the vSphere Management Assistant (vMA). The esxcli command has been greatly expanded in vSphere 5 to provide more manageability, and it will eventually replace the existing vicfg-* management commands.
If you decide to migrate your existing ESX 4.x hosts to ESXi 5, all of the main configuration information will be preserved.
This was first published in July 2011