VMware ESX Server and VMware Infrastructure 3 offer network and systems administrators several options for creating networking configurations. But as always, flexibility brings complexity. To keep it simple, this tip offers recommendations for basic networking options when using ESX and VMotion.
Networking configuration for the ESX Host
At a minimum, a single network adapter is needed to install ESX on a server, but in a typical data center environment two NICs is the absolute minimum configuration for redundancy. Among other things, a network adapter is required for the ESX console operating system to communicate with the external network. Additional network adapters are then used for other options and configurations, depending on the server hardware in use (e.g., rackmount or blade servers, as described below).
Tower and rackmount servers
When using a tower or rackmount-style server, the best-practice configuration requires a minimum of five network adapters. For proper separation of network traffic, three networks would be used:
- Management network for the ESX console operating system (1 NIC)
- VMkernel network(s) – required to use VMotion and/or iSCSI and NFS
- VMotion requires at least a 1 Gbps network interface, and best practice is to run it on a separate network to minimize latency and congestion during VMotion operations. (1 NIC)
- If using iSCSI, a VMkernel NIC must be created for it (best practice is two NICs for redundancy). iSCSI should be on a physically separate network to guarantee bandwidth, eliminate contention, and ensure security.
- NFS can be used for access to virtual machine datastores. If using NFS, another network port must be allocated for its use.
- Virtual Machine Network interface (at least two NICs for redundancy)
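On a five-NIC rackmount host, the separation above could be built from the ESX 3.x service console with the `esxcfg-vswitch` and `esxcfg-vmknic` commands. A minimal sketch follows; the vmnic numbering, port-group names, and IP addresses are illustrative assumptions, not values from this article, and a default install will already have created vSwitch0 for the service console.

```shell
# Sketch only -- vmnic numbers, names, and addresses are assumptions;
# adjust to your environment. (iSCSI/NFS VMkernel NICs would follow the
# same pattern as the VMotion example below.)

# vSwitch0: service console / management network (1 NIC)
esxcfg-vswitch -a vSwitch0          # usually exists after install
esxcfg-vswitch -L vmnic0 vSwitch0   # attach the management uplink

# vSwitch1: VMkernel network for VMotion (1 NIC)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A VMotion vSwitch1                        # port group
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 VMotion  # VMkernel NIC

# vSwitch2: virtual machine network (2 NICs for redundancy)
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "VM Network" vSwitch2

esxcfg-vswitch -l   # list the resulting configuration
```

Note that VMotion itself must still be enabled on the VMkernel port through VirtualCenter before migrations will work.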
Blade servers
Blade servers, generally configured with integrated chassis switches, limit the number of network adapters each blade can connect to the chassis switches. Depending on the blade system vendor, that limit may be as high as eight NICs/HBAs (early blade systems were generally limited to two or four).
Also, because blade servers and chassis have a limited number of uplink ports from the chassis to the distribution/core switches, the network administrator should trunk the uplink ports from the chassis switches and implement 802.1Q VLAN tagging, preferably on at least a 1 Gbps network. This allows ESX to provide logical separation of the various types of network traffic while retaining sufficient bandwidth to run all operations efficiently.
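With only a few uplinks per blade, the traffic types can be separated logically rather than physically by assigning each port group its own 802.1Q VLAN ID on a single teamed vSwitch. The sketch below assumes two uplinks and arbitrary VLAN IDs (10, 20, 30); none of these values come from this article.

```shell
# Sketch: logical separation over trunked blade uplinks.
# vmnic numbers, port-group names, and VLAN IDs are assumptions.
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# Tag each port group with its own 802.1Q VLAN ID
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
esxcfg-vswitch -A VMotion vSwitch0
esxcfg-vswitch -v 20 -p VMotion vSwitch0
esxcfg-vswitch -A "VM Network" vSwitch0
esxcfg-vswitch -v 30 -p "VM Network" vSwitch0
```

For this to work, the chassis switch ports feeding these uplinks must be configured as 802.1Q trunks carrying all of the VLANs in use.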
I recommend a minimum of a 1 Gbps network when using ESX in a blade server environment; this will minimize network bandwidth contention between the host and guest operating systems out to the enterprise. For high-speed storage requirements, I also recommend a Fibre Channel storage network rather than iSCSI (unless a 10 Gbps iSCSI implementation is in place), because bandwidth into the chassis is limited and shared by all host and guest operating systems, so I/O can become the bottleneck. Running an iSCSI storage infrastructure over those links decreases the network bandwidth available for data connections and can hurt performance for every system in the chassis.
ESX NIC management
The ESX host can use its multiple Ethernet ports in a number of ways. Ports can be placed in an active/standby configuration, where ESX detects that a physical port connection has dropped and fails over to a configured secondary port. Multiple ports can be designated as standby ports. This lets an administrator use single uplink connections for non-critical components, such as the console operating system, while keeping a port available for the server to fail over to if the primary port fails.
Additionally, ports can be teamed with several load-balancing policies (load balancing applies to outbound traffic only). The three load-balancing policies are based on the originating virtual switch port ID, a hash of the source MAC address, or an IP-based hash of the source and destination addresses (refer to the VMware documentation for detailed explanations). Failures can be detected by monitoring the link state of the adapter or by beaconing, which probes upstream in the network for a failure. ESX can also notify the physical switches when a port has been reconfigured, so their lookup tables are updated quickly and network errors are minimized.
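A NIC team is formed simply by linking more than one physical adapter to the same vSwitch; a minimal sketch is below, with the vmnic and vSwitch names as assumptions. The load-balancing policy itself (port ID, MAC hash, or IP hash) and the active/standby ordering are then selected per vSwitch or per port group in the VI Client rather than on the command line.

```shell
# Sketch: a two-NIC team on one vSwitch (names are assumptions).
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1   # first uplink in the team
esxcfg-vswitch -L vmnic3 vSwitch1   # second uplink in the team
esxcfg-vswitch -l                   # verify both uplinks appear
```

One design note: the IP-hash policy requires the physical switch ports to be configured for static link aggregation, whereas the port-ID and MAC-hash policies work with ordinary access or trunk ports.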
With these options, ESX provides the flexibility to tailor configurations and the virtual networking architecture to many differing requirements. Depending on your requirements and hardware constraints, these options allow you to configure your ESX environment for the most robust and functional solution your budget will allow.
About the author: Craig A. Newell is a senior consultant at Focus Consulting (www.focusonsystems.com) in Boulder, Colo. He helps end users evaluate technology needs concerning virtualization, server consolidation and blade systems. Newell is a certified project management professional, a certified wireless network administrator, and a certified business continuity planner and served as a technical editor for the book Blade Servers and Virtualization: Transforming Enterprise Computing While Cutting Costs.