VMware ESX network redundancy via Fibre Channel multipathing

With VMware ESX Fibre Channel multipathing, you can improve network redundancy to protect against path failure. This tip details how to access multipathing controls and explains the three multipathing policies.

VMware ESX has built-in support for Fibre Channel multipathing, which enables network redundancy. There are several ways to access multipathing controls and three different multipathing policies. In this tip, I'll explain the details of each and show how to view host bus adapter (HBA) utilization.

Understanding Fibre Channel multipathing
Most Fibre Channel-based storage area networks (SANs) are built so that there isn't a single point of failure. This means that hosts have redundant Fibre Channel HBAs, the fabric itself contains redundant Fibre Channel switches and the storage system has multiple Fibre Channel target ports and/or redundant storage processors.

As a result of this design, hosts have multiple ways of reaching a logical unit number (LUN). Without multipathing support, the host would have no way of knowing that these multiple paths all represent the same LUN, and it would instead show the LUN multiple times. With multipathing support, the LUN is seen only once, and the host knows that it has multiple ways of reaching it. In the event one path fails, the host can use an alternate path. ESX has supported Fibre Channel multipathing since version 2.0.
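To make the idea concrete, here is a minimal sketch (not VMware code; the `Path` and `Lun` classes and all path names are hypothetical) of how a multipathing layer collapses several physical paths into a single logical LUN and fails over between them:

```python
# Hypothetical sketch of multipath failover -- not VMware's implementation.
from dataclasses import dataclass

@dataclass
class Path:
    name: str          # e.g. "vmhba1:0:1" (HBA:target:LUN)
    available: bool = True

class Lun:
    """One logical LUN reachable over several redundant paths."""
    def __init__(self, lun_id, paths):
        self.lun_id = lun_id
        self.paths = paths

    def active_path(self):
        # Use the first path that is still up; raise if none remain.
        for p in self.paths:
            if p.available:
                return p
        raise IOError(f"all paths to LUN {self.lun_id} are down")

# Four discovered paths: without multipathing the host would report four
# separate disks; with it, they are presented as a single LUN.
paths = [Path("vmhba1:0:1"), Path("vmhba1:1:1"),
         Path("vmhba2:0:1"), Path("vmhba2:1:1")]
lun = Lun("lun-001", paths)

print(lun.active_path().name)   # vmhba1:0:1
paths[0].available = False      # simulate a failed path
print(lun.active_path().name)   # vmhba1:1:1
```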

Accessing multipathing controls via VMware Infrastructure Client
There are two ways to see and change the multipathing settings: via the VMware Infrastructure (VI) Client or from the Service console using the esxcfg-mpath command.

For most users, viewing and setting multipathing information is best handled with the VI Client, where path information can be viewed in two separate places, both accessed from the Configuration tab of an ESX host:

  1. Select Storage Adapters, then select a specific adapter and right-click on one of the detected LUNs. You can then access the Manage Paths context menu item. This will let the user set the multipathing policy, the preferred path, and which paths should be active or inactive. (Shown below.)

  2. Select Storage, then right-click on a configured datastore -- not a network file system (NFS) datastore, since multipathing for NFS is handled very differently -- and select the Properties context menu item. From here, a user can see all aspects of a particular datastore including, in the lower right-hand corner, an option to manage paths.

Accessing multipathing controls via the Service console
Using the Service console, run the esxcfg-mpath command. Like many of the other esxcfg-* commands, esxcfg-mpath uses the "-l" (lowercase L) option to list current multipathing information. A sample of the esxcfg-mpath output is shown below.

Using esxcfg-mpath, we can change the multipathing policy (Fixed, Most Recently Used or Round Robin), enable or disable a particular path, or set the preferred path. Typing "esxcfg-mpath" with no arguments will show a listing of the various command-line options and provide a couple of examples as well.

Once you know how to access the multipathing settings, you need to know what they mean. Enabling or disabling a path is straightforward, as is setting a preferred path, but the multipathing policies can be a bit more complex.

Fixed -- When a datastore is set to use the Fixed policy, traffic travels down only the preferred path. If that path becomes unavailable, traffic fails over to a secondary active path; when the preferred path returns, traffic resumes traveling on it. This is the default multipathing policy for storage arrays with active/active storage processors.

Most Recently Used (MRU) -- Like the Fixed policy, MRU uses a single path until that path is no longer available and then fails over to another. Unlike Fixed, when the first path returns, MRU will not switch back; it continues to use the current path until that path fails in turn. MRU is the default policy for storage arrays with active/passive storage processors.

Round Robin -- The Round Robin multipathing policy is officially listed as experimental; it load-balances I/O requests across all available paths.
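The behavioral difference between the three policies can be sketched as simple path-selection strategies. This is an illustrative model, not VMware's implementation; the class names, the `select()` interface, and the path names are all hypothetical:

```python
# Hypothetical sketch of the three ESX multipathing policies.
import itertools

class FixedPolicy:
    """Always prefer one path; fail over only while it is down,
    and fail back as soon as it returns."""
    def __init__(self, paths, preferred):
        self.paths, self.preferred = paths, preferred
    def select(self, up):
        if self.preferred in up:
            return self.preferred
        return next(p for p in self.paths if p in up)

class MruPolicy:
    """Stick with the current path until it fails; never fail back."""
    def __init__(self, paths):
        self.paths = paths
        self.current = paths[0]
    def select(self, up):
        if self.current not in up:
            self.current = next(p for p in self.paths if p in up)
        return self.current

class RoundRobinPolicy:
    """Rotate I/O across all available paths."""
    def __init__(self, paths):
        self.cycle = itertools.cycle(paths)
    def select(self, up):
        while True:
            p = next(self.cycle)
            if p in up:
                return p

paths = ["vmhba1:0:1", "vmhba2:0:1"]
fixed = FixedPolicy(paths, preferred="vmhba1:0:1")
mru = MruPolicy(paths)

# Preferred path fails, then returns:
print(fixed.select({"vmhba2:0:1"}))               # vmhba2:0:1 (failover)
print(fixed.select({"vmhba1:0:1", "vmhba2:0:1"})) # vmhba1:0:1 (failback)

print(mru.select({"vmhba2:0:1"}))                 # vmhba2:0:1 (failover)
print(mru.select({"vmhba1:0:1", "vmhba2:0:1"}))   # vmhba2:0:1 (no failback)
```

Note how Fixed and MRU diverge only after the failed path recovers: Fixed fails back to the preferred path, while MRU stays where it is.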

Unfortunately, it's not possible to view the utilization of individual paths within ESX. This is because these paths represent connections between hosts, Fibre Channel switches, and Fibre Channel target ports on the storage array. To view path utilization, a user would need to gather data from the FC HBA in the host, the FC switches in the fabric, and the FC target ports on the storage array, and correlate the data. Users can, however, get a real-time view of HBA utilization using the built-in esxtop utility.

To view HBA utilization, run "esxtop" and then press the "d" key to switch to viewing HBA utilization. To switch to disk device utilization, press the "u" key.

ABOUT THE AUTHOR: Scott Lowe began working professionally in the technology field in 1994 and has since held the roles of an instructor, technical trainer, server/network administrator, systems engineer, IT manager, and CTO. For the last few years, Scott has worked as a senior systems engineer with a reseller, providing technology solutions to enterprise customers.

This was first published in July 2008
