
VMotion network and storage connectivity issues solved

Using VMotion without attention to storage and network connectivity can create infrastructure management issues. One way to simplify matters is infrastructure virtualization.

VMotion can be a helpful tool for load balancing, hardware maintenance and performance purposes, but using it effectively requires understanding how it works with shared storage and the IP network. In this article, I'll discuss the data center infrastructure balancing act facing IT professionals using VMware's VMotion capability, and explain how infrastructure virtualization can offer a straightforward solution.

Host target management eases storage network problems 
The first networking challenge is on the storage network, and it appears as soon as customers begin to take advantage of the VMotion capabilities in VMware. The benefit of VMotion is that you can move any virtual machine (VM) to any other host server. But simply having VMotion won't afford this functionality; VMotion requires access to shared storage. Today that shared storage can be a Fibre Channel or iSCSI SAN, or a Network File System (NFS) mounted file server. For any migration to work, each host must be able to see and access the guest VM's logical unit numbers (LUNs). For example, if you want to move a VM from Host A to Host B, Host B must be able to see and access the LUNs of each of Host A's virtual machines.
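The LUN-visibility precondition can be modeled in a few lines. This is a toy sketch, not a vendor API; the host names, LUN names and the zoning table are made up for illustration, and it assumes one mapping per LUN:

```python
# Toy model of the LUN-visibility precondition for a VMotion move.
# Host and LUN names are hypothetical; real SAN zoning is managed
# on the fabric, not in a Python dict.
san_zoning = {
    "host-a": {"lun-101", "lun-102"},
    "host-b": {"lun-101"},
}

def can_migrate(vm_luns: set, target: str) -> bool:
    """A migration is only valid if the target host can already
    see every LUN backing the VM."""
    return vm_luns <= san_zoning[target]

print(can_migrate({"lun-101"}, "host-b"))             # True
print(can_migrate({"lun-101", "lun-102"}, "host-b"))  # False: lun-102 is not zoned to host-b
```

If `can_migrate` returns False, the move fails before it starts, which is exactly the bookkeeping burden the rest of this section describes.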

If you have a small virtual infrastructure (two VMs per host), this is not a big deal, but most hosts carry a double-digit number of VMs, which makes management difficult. Imagine 10 ESX hosts, each with 10 virtual machines, where you want to be able to migrate any VM to any host. That is 100 connections to manage.
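The arithmetic above scales linearly with both host count and VM density, which is why it gets out of hand quickly. A minimal sketch, assuming one storage connection per VM:

```python
def storage_connections(hosts: int, vms_per_host: int) -> int:
    """Total guest storage connections in the cluster, assuming
    one connection (LUN) per VM -- a simplification."""
    return hosts * vms_per_host

# The article's example: 10 ESX hosts x 10 VMs each.
print(storage_connections(10, 10))  # 100
# Doubling VM density doubles the bookkeeping:
print(storage_connections(10, 20))  # 200
```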

To solve, or at least lessen the impact of, this problem, you either need to limit the number of target hosts for VMotion or extend your virtualization strategy to infrastructure virtualization. Limiting the number of host targets that VMotion can use limits its usefulness but still adds complexity. When it comes time to move a virtual machine off of Host A, you need to verify which other hosts have the right connections to accept that VM.

This method is still rather complex. Even if you limit the environment to moving 10 VMs between Host A and Host B, that still requires making sure that the storage networking is properly configured for 10 additional LUNs on Host B. And assuming you want to be able to move VMs from Host B to Host A as well, you need 10 properly configured storage network connections there, too. This is on top of the 10 original storage network connections that already exist for the VMs on each primary host. That is 40 total storage network connections across just two ESX hosts, and it does not even take into consideration the rest of the environment that may not be virtualized.
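The two-host accounting above can be sketched the same way. Counts are the article's hypothetical 10-and-10 example:

```python
def pairwise_connections(vms_a: int, vms_b: int) -> int:
    """Storage connections needed so Hosts A and B can each accept
    the other's VMs, assuming one connection per VM."""
    original = vms_a + vms_b  # each VM's connection on its own host
    cross = vms_b + vms_a     # Host A must see B's LUNs and vice versa
    return original + cross

# 10 VMs on each host, bidirectional mobility:
print(pairwise_connections(10, 10))  # 40
```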

VMotion and IP network connections
The same challenges hold true on the IP network. For virtual machines to move between different physical ESX hosts and still operate correctly, they must all be able to access the same IP network (that is, be on the same physical subnet).

It is also commonly recommended that the customer use a separate interface for the VMotion network. The most common approach is to physically locate the ESX hosts in the same racks, attached to the same switch, but this causes issues with space and power utilization as well as flexibility. In some environments, an entirely separate private network is created to carry the migration traffic rather than routing it over the core networking infrastructure. A better solution is to use VLANs to connect the ESX servers, but this adds a fair amount of complexity and requires manual setup and maintenance. Tracking where you migrated a virtual machine to, and which switches were affected by that move, becomes more difficult.
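The same-subnet requirement is easy to check mechanically. A minimal sketch using Python's standard `ipaddress` module; the host addresses and /24 prefix are hypothetical:

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, prefix: int) -> bool:
    """True if both IPv4 addresses fall inside the same subnet."""
    net_a = ipaddress.ip_interface(f"{ip_a}/{prefix}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{prefix}").network
    return net_a == net_b

# Hypothetical ESX host addresses:
print(same_subnet("10.1.4.11", "10.1.4.42", 24))  # True: a VM can move between these hosts
print(same_subnet("10.1.4.11", "10.2.9.7", 24))   # False: the VM's network breaks after migration
```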

Infrastructure virtualization as a solution 
A solution to both the storage and network challenges caused by VMware is infrastructure virtualization. Infrastructure virtualization lets you rapidly change which servers are running which OSes (including an OS that virtualizes other OSes). It also allows you to shift which servers are connected to the IP network and storage network without making physical machine, cable, LAN connection or SAN access changes. Infrastructure virtualization is provided by companies like Scalent, Egenera (although its VMware support is limited) and Unisys with uAdapt.

When it comes to solving VMware networking issues, infrastructure virtualization addresses one of the biggest causes at its source: the inability to dynamically allocate virtual hosts. The challenge with VMware and VMotion is that you have to architect the design in advance. Other ESX servers must be up, running and pre-configured to receive the VMs you want to send to them and, as stated earlier, they must also be on the same subnet and on the same switch.

Infrastructure virtualization changes this scenario dramatically by giving you the ability to add "ESX on demand" functionality to your virtual infrastructure. You can have a standby server powered off and sitting anywhere in the data center, or the standby server could be used for testing an entirely different application on an entirely different network. When the time comes to migrate virtual machines, you simply power the standby server on (or point it at your backup ESX image), make the connections to the appropriate OS, in this case VMware, and configure the required IP and storage network connections. Next, reboot the standby server to lock those settings in, and then begin migrating virtual machines. The reboot step can be a concern, but most VM migrations are done for routine maintenance, and configuring a standby system on the fly with one reboot prior to VM movement is very simple. The trade-off is simple day-to-day network management in exchange for a boot-time delay in server relocation.
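The "ESX on demand" sequence above can be summarized as an ordered runbook. This is an illustrative sketch only; the server name and step functions are placeholders, not a real vendor API:

```python
def bring_up_standby(server: str) -> list:
    """Print and return the ordered steps for repurposing a
    standby server as an ESX host (hypothetical runbook)."""
    steps = [
        "power on the standby server, or repoint it at the backup ESX image",
        "attach the required IP network (VLAN) connections",
        "attach the required storage network (LUN) connections",
        "reboot once to lock the new identity in",
        "begin migrating virtual machines with VMotion",
    ]
    for i, step in enumerate(steps, 1):
        print(f"{i}. {step} [{server}]")
    return steps

bring_up_standby("standby-esx-01")
```

Note that the single reboot, the trade-off discussed above, sits in the middle of the sequence: everything before it is reconfiguration, everything after it is ordinary VMotion.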

When the need arises to do routine maintenance on the standby server, the standby server can mount an ESX image. When the maintenance on the original server is complete, redeploying the standby server as a test server is merely a matter of re-pointing the server to the test OS image and reestablishing the original network connections.

This means that ESX on demand becomes nearly instantaneous, as changing system function and topology doesn't require touching physical cables or machines. You can rack once, cable once, then reconfigure repeatedly as needed. Failover is automatic, and your data center functionality matches your data center diagram - what you see is what you get.

ABOUT THE AUTHOR: George Crump is President and Founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
 

This was last published in July 2008


