Long-distance vMotion raises the maximum network round-trip latency that vSphere tolerates, vastly extending the potential migration distance for virtual machines. This is not a radical change in vSphere 6, but it opens important new use cases for organizations facing business continuance, disaster preparedness or other pressures. As with any new feature, potential adopters should evaluate long-distance vMotion in limited test cases and confirm that it performs as expected before deploying it in production.
Requirements for long-distance vMotion
In vSphere 6, long-distance vMotion imposes several additional considerations for both local and remote data centers. In terms of licensing, each site involved in the long-distance migration will need VMware vSphere 6.0 -- or later -- Enterprise Plus edition. The round-trip time (RTT) latency between hosts must be no more than 150 milliseconds (ms). A network bandwidth of at least 250 Mbps is recommended for each simultaneous migration -- if you plan to migrate four VMs together, plan on at least 1 Gbps of network bandwidth. It may also be necessary to modify the way that VM file transfer traffic is placed on the provisioning TCP/IP stack, though detailed guidance is available in VMware documentation.
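These figures are easy to sanity-check before scheduling a migration window. The following is a minimal sketch of the RTT and bandwidth math above -- it is not a VMware tool, and the function and parameter names are purely illustrative:

```python
def vmotion_preflight(rtt_ms: float, link_mbps: float, concurrent_migrations: int) -> list:
    """Return a list of problems with a proposed migration window.

    An empty list means the link meets the guidance quoted above.
    """
    MAX_RTT_MS = 150          # vSphere 6 long-distance vMotion RTT ceiling
    MBPS_PER_MIGRATION = 250  # recommended bandwidth per simultaneous migration

    problems = []
    if rtt_ms > MAX_RTT_MS:
        problems.append("RTT %g ms exceeds the %d ms limit" % (rtt_ms, MAX_RTT_MS))
    required_mbps = MBPS_PER_MIGRATION * concurrent_migrations
    if link_mbps < required_mbps:
        problems.append("%d concurrent migrations need %d Mbps; link provides %g Mbps"
                        % (concurrent_migrations, required_mbps, link_mbps))
    return problems
```

For example, four concurrent migrations over a 1 Gbps link with 120 ms of RTT pass the check, while a single migration over a 200 Mbps link with 160 ms of RTT fails on both counts.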
But there are some other potential wrinkles. Long-distance vMotion also supports migration across vCenter Server instances, and the migration process maintains each VM's properties, including the universally unique identifier, performance counters, events, alarms, distributed resource scheduler groups, high-availability configurations, affinity and anti-affinity rules, and so on. The idea is that the VM not only moves from one host server to another; it can actually move all of its related configurations -- including network, management and storage setup -- to another vCenter environment simultaneously.
This means that, in addition to the long-distance migration requirements, both ends of the migration will require vCenter Server 6.0 -- or later -- licensed at the Enterprise Plus edition. Using the vSphere Web Client requires that each vCenter Server instance run in enhanced linked mode, use the same vCenter single sign-on (SSO) domain, and remain time-synchronized for SSO token verification -- separate SSO domains are permitted when using the vSphere APIs for migrations. When migrating computing resources, both vCenter Server instances must share the same VM storage.
Implementing long-distance vMotion should not be particularly difficult for most organizations, but as with any important business capability, evaluate and test it in a controlled environment, gauge performance and consider unforeseen consequences before deploying the technology in production.
Are there other requirements or limitations for long-distance vMotion?
At its heart, the move to long-distance vMotion merely raises the maximum allowable RTT latency between sites from 10 ms to as much as 150 ms. This effectively raises the maximum practical migration distance from several hundred miles to several thousand miles. The basic requirements are straightforward, but it's worth considering some additional implications and reviewing the underlying limitations for all vMotion operations.
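A back-of-the-envelope propagation calculation shows why the new ceiling translates to thousands of miles. This sketch assumes light in fiber travels at roughly 200,000 km/s and ignores switching and routing delay, so real-world paths support less distance than the bound suggests:

```python
FIBER_KM_PER_SECOND = 200_000  # roughly 2/3 the speed of light in a vacuum

def max_one_way_km(rtt_ms: float) -> float:
    """Upper bound on one-way fiber distance for a given round-trip time."""
    one_way_seconds = (rtt_ms / 1000.0) / 2.0
    return one_way_seconds * FIBER_KM_PER_SECOND

# The old 10 ms ceiling bounds the path at about 1,000 km (several hundred miles);
# the new 150 ms ceiling raises that to about 15,000 km (several thousand miles).
```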
Consider the cost and availability of network bandwidth needed to sustain migrations -- especially between vCenter Server instances. The recommendation is about 250 Mbps per simultaneous migration, so the total requirement scales with the number of concurrent migrations. This connectivity must be available end to end. In some cases, a business might need to contract dedicated network connections with the carrier. Additional bandwidth and dedicated connections can generate substantial expenses for the business.
In addition, greater geographic distance usually means more switching and routing gear from carriers and internet service providers, increasing the potential for network disruptions. For example, long-distance vMotion can allow workload migrations across continents or oceans, but this improvement to the vSphere feature set doesn't guarantee the network's availability. This can be a problem for businesses that want long-distance migration for disaster recovery, business continuity or other regulatory obligations, but overlook the implications of network reliability across much longer distances.
And don't forget the basic limitations of all vMotion operations. For example, you cannot use vMotion on VMs that use raw disks for clustering. Source and destination IP address families must match, so IPv4-to-IPv6 migrations won't work.
Migration can also be prohibited when a VM depends on devices -- such as DVD drives -- that are not accessible on the destination system. Even issues such as performance counters can impair migration: if performance counters -- such as CPU performance counters -- are enabled, a VM can only be migrated to host systems with identical performance counters enabled. VMs with Flash Read Cache can be migrated only if the destination also supports the cache. Finally, a VM that requires hardware-based 3D graphics acceleration must also have a graphics card on the destination system -- though automatic rendering settings can use GPU or CPU resources on the destination.
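These compatibility rules amount to a simple checklist. Below is a hypothetical sketch -- the field names are invented for illustration and do not correspond to any vSphere API -- of the checks described above:

```python
def migration_blockers(vm: dict, dest: dict) -> list:
    """Return reasons a VM cannot migrate to a destination host (empty list = OK).

    The dictionary keys are illustrative placeholders, not real vSphere properties.
    """
    blockers = []
    if vm.get("uses_raw_disk_clustering"):
        blockers.append("VM uses raw disks for clustering")
    if vm["ip_version"] != dest["ip_version"]:
        blockers.append("source/destination IP address families differ")
    if vm.get("perf_counters") and vm["perf_counters"] != dest.get("perf_counters"):
        blockers.append("destination lacks identical performance counters")
    if vm.get("flash_read_cache") and not dest.get("supports_flash_read_cache"):
        blockers.append("destination does not support Flash Read Cache")
    if vm.get("needs_3d_hw") and not dest.get("has_gpu"):
        blockers.append("destination has no hardware 3D graphics")
    return blockers
```

A matching destination yields no blockers, while a host on a different IP family with no matching performance counters would be rejected on both grounds.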