Live Migration vs. vMotion: A guide to VM migration
A comprehensive collection of articles, videos and more, hand-picked by our editors
VMware vMotion introduced live virtual machine migration to vSphere. As vSphere has evolved, so has vMotion -- becoming faster and more powerful -- but that doesn't mean it is without limitations.
What vMotion limitations do I need to consider in designing a vSphere infrastructure?
Let's start with the underlying infrastructure. VMware vSphere is a hypervisor that operates on x86 servers. However, vSphere vMotion can only take place between servers with compatible CPUs. You can use Enhanced vMotion Compatibility (EVC) to make older and newer server CPUs speak the same language, but virtual machines (VMs) cannot migrate with vMotion from a server using an Intel CPU to one with an AMD CPU.
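The vendor rule above can be summed up in a short sketch. This is purely illustrative logic, not a VMware API; the function name and inputs are assumptions for the example:

```python
def vendors_compatible(src_vendor: str, dst_vendor: str) -> bool:
    """Illustrative check: vMotion requires the same CPU vendor on
    both hosts. EVC masks feature differences between CPU generations
    within one vendor, but cannot bridge Intel and AMD."""
    return src_vendor.strip().lower() == dst_vendor.strip().lower()
```

In practice, vCenter Server performs this compatibility check for you and blocks a cross-vendor migration before it starts.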
The network configuration also plays a role in vMotion success or failure. VMware augmented network monitoring and optimization options in vSphere 5.1, allowing, for instance, the admin to more easily resolve network issues caused by switch misconfigurations, which can help keep the network in shape for vMotions. vSphere vMotion only works if roundtrip network latency is below 5 milliseconds. If this network latency requirement proves too strict, admins can implement the latency-aware Metro vMotion feature in vSphere Enterprise Plus edition, which doubles the roundtrip latency limit to 10 milliseconds.
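The two latency thresholds can be captured in a small helper. This is an illustrative sketch built from the figures above (5 ms standard, 10 ms with Metro vMotion), not part of any VMware tool:

```python
# Thresholds from the article: 5 ms roundtrip for standard vMotion,
# 10 ms with the Metro vMotion feature (vSphere Enterprise Plus).
STANDARD_LIMIT_MS = 5.0
METRO_LIMIT_MS = 10.0

def vmotion_latency_ok(rtt_ms: float, metro: bool = False) -> bool:
    """Return True if the measured roundtrip latency permits vMotion."""
    limit = METRO_LIMIT_MS if metro else STANDARD_LIMIT_MS
    return rtt_ms < limit
```

An admin might feed this a measured roundtrip time (from ping, for example) when sizing a stretched-cluster design.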
Unlike in older versions, today's vMotion can use more than one physical network interface card (pNIC) to migrate a VM. As of vSphere 5, vMotion can use up to four 10 Gigabit Ethernet (GbE) pNICs or sixteen 1 GbE pNICs.
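Those per-speed caps lend themselves to a quick validation sketch. Again, this is hypothetical helper code based on the vSphere 5 limits quoted above, not a VMware API:

```python
# vSphere 5 vMotion pNIC caps cited above: at most four 10 GbE
# pNICs, or sixteen 1 GbE pNICs, dedicated to vMotion traffic.
PNIC_CAPS = {10: 4, 1: 16}

def pnic_config_ok(speed_gbps: int, count: int) -> bool:
    """Return True if a proposed vMotion pNIC configuration fits
    within the documented caps for that link speed."""
    cap = PNIC_CAPS.get(speed_gbps)
    return cap is not None and 1 <= count <= cap
```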
Can VMs vMotion between hosts with different storage?
Shared storage was a longstanding requirement for vMotion until vSphere 5.1. Shared-nothing live migration allows a VM to vMotion from one host to another without shared storage, so hosts that rely on direct-attached storage can participate. Now that vMotion offers simultaneous memory and storage migration, VMs can move between vCenter Server instances, as long as the network supports it.
Storage vMotion also evolved with vSphere, moving data from one storage array to another. Prior to vSphere 5.1, Storage vMotion had its own shared-storage requirement: the host needed access to both data stores. Now, Storage vMotion and vMotion can work together to move the VM's memory and its disk to a new host with fewer limitations.
What restrictions do VMware admins have to observe with vMotion?
VMware limits the number of concurrent vMotion migrations per host to eight. This is a great increase over vSphere 4.0 and earlier, which migrated one VM at a time, but it has come under fire because vSphere vMotion's main competitor, Microsoft Hyper-V's live migration, has no cap on concurrent live migrations.
With vSphere 5.1 and a 1 Gbps network connection, up to four VMs can vMotion concurrently per host. If the infrastructure uses a 10 Gbps connection, each host can vMotion up to eight VMs at once.
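The per-host concurrency rule above can be sketched as a simple lookup. This is an illustrative example of the limits described in this section, not VMware code:

```python
def concurrent_vmotion_limit(link_gbps: float) -> int:
    """Per-host concurrent vMotion cap described above: four
    concurrent migrations on a 1 Gbps vMotion network, eight on a
    10 Gbps network, and none below the 1 Gbps minimum."""
    if link_gbps >= 10:
        return 8
    if link_gbps >= 1:
        return 4
    return 0
```

The takeaway for design work: upgrading the vMotion network from 1 GbE to 10 GbE doubles how many VMs a host can evacuate at once, which matters for maintenance-mode drain times.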