
Physical network design options for VMware Infrastructure 3 environments

To benefit more fully from VMware ESX, you need a network design that addresses network bottlenecks. This tip reviews networking options and their impact on a physical network.

VMware ESX offers outstanding support for a wide variety of networking configurations. Users have the option of:

  • network interface card (NIC) teaming, with or without physical switch support;
  • numerous VLAN configurations, three of which are described in more detail in VST, EST and VGT tagging tips;
  • support for both active and standby NICs, including per-port group active/standby NICs; and
  • jumbo frame support.

With all these options, it can be daunting to find the right configuration for your environment. In this article, we take a closer look at some network design decisions and how they play into the physical side of the network.

Why uplinks are significant

The best place to start this discussion is with the connection between the virtual switch environment and the physical switch environment: the uplinks. When a user configures a vSwitch in VMware Infrastructure 3, or VI3, the vSwitch can be connected to the physical network through one or more NICs designated as uplinks. Users have the option of simply adding multiple NICs as uplinks, without any physical switch support required, or they can aggregate those links using static 802.3ad link aggregation or a proprietary equivalent (such as Cisco Systems' Gigabit EtherChannel).

Each of these configurations requires different settings on the vSwitch.

  • When using link aggregation, the vSwitch must be set to "Route based on IP hash."
  • When not using link aggregation, the vSwitch may be set to any of the other settings; the default setting of "Route based on originating virtual port ID" is considered the best choice in most cases (both policies are sketched below).
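
To make the distinction concrete, here is a minimal, purely illustrative Python sketch of how the two policies pick an uplink. The class, method and NIC names are hypothetical; this is not VMware's API or its actual hashing algorithm, only a model of the rules above.

```python
import hashlib

class VSwitch:
    """Toy model of vSwitch uplink selection -- not VMware's API."""

    def __init__(self, uplinks, physical_link_aggregation=False):
        self.uplinks = list(uplinks)  # e.g. ["vmnic0", "vmnic1"]
        # A physical-switch aggregation group requires "Route based on IP hash";
        # otherwise the default "Route based on originating virtual port ID" applies.
        self.policy = "ip-hash" if physical_link_aggregation else "port-id"

    def _bucket(self, key):
        # Stable hash of the key, mapped onto one of the available uplinks.
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.uplinks[int(digest, 16) % len(self.uplinks)]

    def choose_uplink(self, src_ip, dst_ip, virtual_port_id):
        if self.policy == "ip-hash":
            # Every frame for a given source/destination IP pair
            # lands on the same physical link.
            return self._bucket(f"{src_ip}->{dst_ip}")
        # Default policy: the VM's virtual port, not the traffic,
        # determines which uplink is used.
        return self._bucket(f"port-{virtual_port_id}")
```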

These configurations behave differently and will place traffic on the various links differently, depending upon the type of traffic. For example, when using link aggregation, the nature of the link aggregation algorithms involved mandates that there be multiple source-destination IP address pairs in order for more than one link in the group to be used. Without multiple source-destination IP address pairs, only a single link in the group will be utilized. For traffic types that are primarily point-to-point, like VMotion, the use of link aggregation will provide very little benefit, if any at all. For other traffic types where there are multiple source-destination IP address pairs, like virtual machine traffic, link aggregation may result in a more even distribution of traffic across multiple uplinks.

It's important to note, however, that the use of link aggregation will not result in more bandwidth available to a single source-destination IP address pair. It will increase overall aggregate bandwidth, but not bandwidth for individual connections.
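
Using the illustrative model above, that limitation is easy to see: a single source-destination pair always hashes to the same uplink no matter how much traffic it carries, while many distinct pairs spread across the team. Again, all names here are hypothetical.

```python
from collections import Counter

# Continues the illustrative VSwitch sketch above (hypothetical names).
vswitch = VSwitch(["vmnic0", "vmnic1"], physical_link_aggregation=True)

# A point-to-point flow such as VMotion: one IP pair, so one uplink,
# regardless of how much data moves between the two hosts.
print(vswitch.choose_uplink("10.0.0.10", "10.0.0.20", virtual_port_id=1))
print(vswitch.choose_uplink("10.0.0.10", "10.0.0.20", virtual_port_id=2))
# -> the same vmnic both times

# Virtual machine traffic to many clients: many IP pairs, so the load
# spreads across both members of the aggregated group.
clients = [f"192.168.1.{i}" for i in range(1, 51)]
usage = Counter(vswitch.choose_uplink("10.0.0.10", ip, 1) for ip in clients)
print(usage)  # roughly even split between vmnic0 and vmnic1
```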

Possible drawback: Redundancy

Link aggregation has a potential drawback as well: redundancy. While link aggregation can provide redundancy to protect against the failure of a single member of the group, all of those links must typically connect to the same physical switch. That physical switch represents a single point of failure: if it fails, the entire group goes down. There are some newer switches that support cross-stack link aggregation, eliminating this potential problem, but those switches are the exception and not the rule. This will be a major factor in considering the use of link aggregation in the VMware ESX network design.

When not using link aggregation, organizations can provide switch redundancy by using multiple uplinks and connecting those uplinks to different physical switches. What organizations gain in redundancy, however, they may lose in utilization. In some configurations, only a single uplink will be used even when multiple uplinks are present. This is especially true for VMkernel ports. One way to get a little more from the uplinks without sacrificing redundancy is to use customized NIC failover orders for each port group.

For example, placing a service console port group on a vSwitch with two uplinks will generally result in only a single uplink being utilized. The second uplink will remain idle, coming into use only if the first uplink fails. By combining another type of traffic on the same vSwitch and setting the NIC failover order for that port group to preferentially use the second NIC, users can make better use of multiple uplinks on a vSwitch without sacrificing redundancy and fault tolerance.
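
A rough Python sketch of that idea follows. The port group and NIC names are hypothetical, and the function models only the active/standby ordering described above, not the actual VI3 configuration interface.

```python
def active_uplink(failover_order, failed_nics=()):
    """Return the first NIC in the preferred order that is still healthy."""
    for nic in failover_order:
        if nic not in failed_nics:
            return nic
    return None  # no healthy uplinks remain

# Both port groups share the same two uplinks but prefer opposite NICs,
# so under normal conditions each physical link carries some traffic.
port_groups = {
    "Service Console": ["vmnic0", "vmnic1"],  # active vmnic0, standby vmnic1
    "VMotion":         ["vmnic1", "vmnic0"],  # active vmnic1, standby vmnic0
}

for name, order in port_groups.items():
    print(name, "->", active_uplink(order))

# If vmnic1 fails, VMotion falls back to vmnic0, so redundancy is preserved.
print("VMotion after failure ->",
      active_uplink(port_groups["VMotion"], failed_nics={"vmnic1"}))
```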

As they design the optimal network configuration for VMware ESX implementations, organizations have many choices at their disposal. Physical switch support for cross-switch link aggregation, network traffic patterns, link redundancy, and NIC utilization all need to be considered and included in the network design. Otherwise, organizations could end up creating network designs that are inefficient and that introduce traffic bottlenecks.

About the author:
Scott Lowe is a senior engineer for ePlus Technology, Inc. He has a broad range of experience, specializing in enterprise technologies such as storage area networks, server virtualization, directory services, and interoperability.
