CHAPTER 1: Networking configurations and considerations
Basic networking for rack and blade servers
VMware ESX Server provides flexibility for optimizing configurations and virtual networking architectures to meet many different requirements. But configuration flexibility can be a double-edged sword because basic network architectures with VMware can become bewildering in the face of so many options. Thus network administrators should become familiar with basic networking options for VMware on blade and rack servers to optimize their ESX networks.
Tower and rackmount servers require a minimum of five network adapters. Because blade servers and chassis have a limited number of uplink ports from the chassis to the distribution/core switches, network administrators should trunk the uplink ports from the chassis switches and implement 802.1Q VLAN tagging, with a minimum of 1 Gbps of bandwidth for the network.
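To make the tagging itself concrete, here is a minimal Python sketch of what 802.1Q tagging does to an Ethernet frame: a 4-byte tag (the TPID 0x8100 plus a 16-bit TCI carrying the VLAN ID) is inserted after the destination and source MAC addresses. The frame contents and function names are illustrative, not any VMware internal API.

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the 12-byte destination/source MAC pair."""
    if not 0 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be between 0 and 4094")
    tci = (priority << 13) | vlan_id  # PCP (3 bits), DEI (1 bit), VID (12 bits)
    return frame[:12] + struct.pack("!HH", TPID, tci) + frame[12:]

def vlan_of(frame: bytes) -> int:
    """Read the VLAN ID back out of a tagged frame."""
    tpid, tci = struct.unpack("!HH", frame[12:16])
    if tpid != TPID:
        raise ValueError("frame is not 802.1Q tagged")
    return tci & 0x0FFF
```

Every frame crossing the trunk between the chassis switches and the distribution/core switches carries such a tag, which is how multiple VLANs share the limited uplink ports.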
System administrators can configure ESX to use multiple Ethernet ports. An active/standby configuration should be implemented so that a standby port takes over if the primary port fails. Network administrators should also configure NIC teaming with a load-balancing policy based on the originating virtual port ID, a hash of the source MAC address, or an IP-based hash of the source and destination addresses.
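The difference between the three load-balancing policies comes down to which traffic attribute selects the uplink. The following Python sketch is a toy model of that selection; the function names and the exact hash arithmetic are illustrative assumptions, not VMware's actual implementation.

```python
# Toy model of the three NIC-teaming load-balancing policies.

def by_virtual_port_id(port_id: int, uplinks: list) -> str:
    """All traffic from one virtual switch port uses one uplink."""
    return uplinks[port_id % len(uplinks)]

def by_source_mac(src_mac: str, uplinks: list) -> str:
    """Hash the source MAC address to pick an uplink."""
    last_octet = int(src_mac.split(":")[-1], 16)
    return uplinks[last_octet % len(uplinks)]

def by_ip_hash(src_ip: str, dst_ip: str, uplinks: list) -> str:
    """Hash source and destination IPs, so a single VM can spread
    traffic across uplinks, one per destination."""
    key = sum(int(o) for o in src_ip.split(".")) ^ sum(int(o) for o in dst_ip.split("."))
    return uplinks[key % len(uplinks)]
```

The first two policies pin each virtual machine to a single uplink, while the IP-based hash lets one busy VM use several uplinks at once; note that IP-hash balancing requires matching link aggregation on the physical switch.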
To watch for network failures, monitor the link state of the adapter, and use beaconing to detect failures upstream within the network. In the same vein, ports can be configured to notify switches in the network that a port has been reconfigured, so that ARP tables are updated; this will minimize disruption during failover.
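The failover decision these two checks feed into can be sketched as a small Python function. This is a simplified model under assumed inputs (per-adapter link state and beacon results), not ESX's actual failover code; it shows why beaconing matters: an adapter can report link-up while an upstream switch has failed.

```python
def pick_active(link_up: dict, beacon_ok: dict, preferred: list):
    """Return the first healthy adapter in failover order.

    An adapter counts as healthy only if its local link is up AND
    beacon probes sent by its teammates were received on it, which
    catches upstream failures that link state alone misses.
    """
    for nic in preferred:
        if link_up.get(nic) and beacon_ok.get(nic):
            return nic
    return None  # every path is down; alert the administrator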
For a more detailed explanation of virtual switches, physical and virtual NICs, and MAC addresses, download chapter five of Virtualization with VMware ESX Server, made available to TechTarget readers by Syngress Publishing. This chapter, which covers virtual networking, provides enough detail that "both the beginner and possibly the advanced ESX administrator" will find it useful.
Configure and implement VLANs on VMware VI3
Virtual LANs (VLANs) are not new, and most network architects and administrators know the ins and outs of configuring them for traditional infrastructures. But configuring VLANs for VMware VI3 is a different story: procedures that worked without virtualization don't necessarily work with it. Thus, before setting up VLANs, network administrators need to know a few things:
- How many physical NICs are required;
- Which VLAN a new virtual server will call home; and
- How VLANs work.
When most networking pros talk about building VLANs with VMware VI3, they are usually referring to VLAN trunks. However, VI3 actually supports three types of VLAN configuration: virtual switch tagging (VST), external switch tagging (EST) and virtual guest tagging (VGT). VLAN tagging allows a VLAN to be connected directly to a guest virtual machine. Administrators should become familiar with what VST, EST and VGT are and how to use them.
Virtual switch tagging, or VST, is usually the best option for a guest VM, but it depends on the individual business's needs. With VST, VLAN trunks are used: the physical switch treats the ESX Server's virtual switch like another physical switch, tagging traffic appropriately as it passes across the trunk to the server's NICs. The virtual switch then uses the tags to direct the traffic to the correct port.
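A minimal sketch of that last step, in Python, shows the idea: frames arrive on the trunk still tagged, and the virtual switch uses the VLAN ID to pick the port group whose VMs should receive them. The class and its port-group layout are made up for illustration.

```python
class ToyVirtualSwitch:
    """Toy model of VST: the VLAN tag on an incoming trunk frame
    selects the port group that receives the traffic."""

    def __init__(self):
        self.port_groups = {}  # VLAN ID -> names of VMs in that port group

    def add_port_group(self, vlan_id: int, vms: list):
        self.port_groups[vlan_id] = list(vms)

    def deliver(self, vlan_id: int) -> list:
        # The tag is stripped here; guests in the port group see untagged frames.
        return self.port_groups.get(vlan_id, [])
```

This is also why, under VST, the guest operating systems need no VLAN awareness at all: the virtual switch consumes the tag before the frame reaches them.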
EST or VGT can be more appropriate options if your organization's servers plug into distribution-layer switches that connect to a core switch. In that topology, VST tagging would be impossible, and you would need to use EST tagging instead.
Additionally, if a particular virtual machine needs to be on several VLANs simultaneously, then VGT makes more sense. You'll need guest OS support for VLAN drivers; such support is common in UNIX and UNIX-like operating systems, such as Solaris, OpenBSD and certain Linux distributions.
Virtualization expert Rick Vanover discusses why provisioning network redundancy for the ESX service console port is important. He suggests having a minimum of two interfaces assigned to the service console port. VirtualCenter 2.5 will warn you if you have only one interface assigned; earlier versions of VirtualCenter do not.
In VirtualCenter 2.5, the missing interface causes the cluster error indicator to remain present indefinitely with ESX 3.0.1 and 3.0.2 hosts, and most likely 3.5 hosts as well. This is important because if your virtual servers encounter a new or additional error, you probably wouldn't notice it right away. To resolve the problem, take a teamed vSwitch from a virtual machine network that does not need redundancy (such as a test network) and reconfigure it, on the network and within VirtualCenter, to be on the same network as the service console port.
Disconnected network adapters
If you're making physical-to-virtual migrations with ESX, it's handy to know that you can configure the virtual server to have its network adapter disconnected at power-on. You'll be able to see the virtual hardware inventory from the guest operating system, but it will appear as if the network cable were unplugged. With the VM offline, you can configure your IP addressing and DNS information, although you won't be able to test the IP addresses.
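In the virtual machine's configuration (.vmx) file, this corresponds to settings like the following fragment ("ethernet0" is simply the first virtual adapter; your adapter name may differ):

```
ethernet0.present = "TRUE"
ethernet0.startConnected = "FALSE"
```

With startConnected set to FALSE, the guest boots with the virtual NIC present but disconnected, and you can reconnect it from the VI Client once the machine is ready for the network.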
This option is useful because, in certain cases, having a candidate virtual machine on the network and performing its intended tasks too soon can cause a variety of errors: duplicate IP addresses, virtual machine applications picking up data simultaneously with another live system, formatting issues when a newer version of the business system feeds results to another system, and so on.
Networking in ESX offers great flexibility, but with flexibility comes room for error. With this tip and the important links scattered throughout, you should have a good roadmap for how and why to configure networking for your virtual servers for optimal redundancy, speed and availability, tailored to your computing environment's specific needs.
This was first published in February 2008