Command-line tools are critical for mass-configuring hosts and ensuring consistent configuration across VMware servers. Now that vSphere is available, command-line options have changed in places, so VMware administrators need some command-line chops.
This tip takes you through commands for ESXi 4 to configure hosts for standalone use with the free license, and to script pre-configuration tasks before the servers are assigned to vCenter Server for management.
Enable command-line interface on ESXi 4
As in ESXi 3, the command-line interface (CLI) is not accessible in ESXi 4 unless you know how to enable and access it. ESXi default installations start on the vmkernel screen, shown below in Figure 1.
The F2 and F12 options allow configuration of basic networking and system events, but they don't allow us to do everything. To enable the local console prompt, press Alt-F1, type "unsupported" and press Enter. You will then be prompted for the root password and subsequently placed into the local console of the ESXi host, as Figure 2 shows.
Now you can run commands through a tool such as Hewlett-Packard Co.'s Integrated Lights-Out or Dell Remote Access Controller (DRAC) management interfaces, or enable Secure Shell (SSH) for the ESXi host. Refer to the following link for information on how to enable SSH on an ESXi host.
Configuring virtual switches with esxcfg-vswitch
The good news is that the base functionality of this command is backward-compatible with ESXi version 3, so any scripts you have already written to create standard virtual switches on ESXi version 3 will work in both environments. That is particularly helpful when performing an in-place upgrade on the same hardware. There are, however, a number of new parameters for the command, most of which support the new Nexus 1000V virtual switch.
There are two primary new parameters for the esxcfg-vswitch command that do not apply to the Nexus 1000V: -x, which displays the maximum number of uplinks for a switch, and -X, which sets it. This value is the number of uplink interfaces (vmnics) assigned to the vSwitch, not the number of ports on the virtual switch.
Most scripts written for ESX 3.x and ESXi 3.x will translate fine to vSphere if you don't use the Nexus 1000V virtual switch. Check out the following link on how to script the creation of your virtual network for more info.
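As a sketch of such a script, the commands below create a standard vSwitch, attach an uplink and a port group, and cap the uplink count with the new -X parameter. The names vSwitch1, vmnic1, and "VM Network 2" are placeholders for your environment, and the run wrapper only prints each command (a dry run); remove it to execute for real on an ESXi 4 host.

```shell
# Dry-run sketch of scripted standard-vSwitch creation on ESXi 4.
# vSwitch1, vmnic1, and "VM Network 2" are placeholder names.
run() { printf '+ %s\n' "$*"; }    # print each command instead of executing it

run esxcfg-vswitch -a vSwitch1                 # create the virtual switch
run esxcfg-vswitch -L vmnic1 vSwitch1          # link an uplink NIC to it
run esxcfg-vswitch -A 'VM Network 2' vSwitch1  # add a port group
run esxcfg-vswitch -X 2 vSwitch1               # new in vSphere: cap uplinks at 2
```

Because the first three commands match the ESXi 3 syntax, the same script minus the -X line also works on the older platform.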
If you do opt to use the Nexus 1000V virtual switch, new options with the esxcfg-vswitch command are available for the DV Port.
Multipathing change of heart with esxcfg-mpath and esxcli
While the virtual switch command is nicely analogous to the previous version, the multipath command interface is different in vSphere. I have used the esxcfg-mpath command for two main tasks: obtaining logical unit number (LUN) serial numbers from virtualized storage and setting the multipath policy via a scripted interface.
There are three multipath policies for use in Virtual Machine File System (VMFS)-based shared storage (iSCSI, local, Fibre Channel) on ESXi: most recently used, fixed, and round robin. I frequently change the default of fixed or most recently used to round-robin when multipath input/output (I/O) is an option on shared storage.
VMware vSphere takes round robin out of experimental mode, and the policy can now be set from the command line. For ESX/ESXi version 3 servers, the following command would change a LUN to a round-robin multipath policy:
esxcfg-mpath --policy=rr --lun=vmhba2:0:1
In vSphere, however, the esxcfg-mpath command is not very helpful. To perform the same multipath policy configuration on an ESXi 4 system, we need to introduce the esxcli commands. Refreshingly, esxcli is very word-driven. The esxcli command space for multipathing is straightforward. The following command will list the multipath policy for all volumes:
esxcli nmp device list
Figure 3 below shows this output on one ESXi 4 host with one local VMFS volume and one iSCSI VMFS volume, with the policy highlighted in yellow:
To change the policy on the iSCSI LUNs to round-robin, we need to know the long name of the device. The long name of the LUN can be found in the first line of the section containing the path in question, highlighted in green in the example above.
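Pulling those long names out by hand doesn't scale across many LUNs. Assuming the output format described above, where each device's long name sits on an unindented line followed by indented detail lines, a little awk can extract every device name. The sample text here is a trimmed, hypothetical stand-in for real output; on a live host you would pipe esxcli nmp device list into the same awk filter.

```shell
# Hypothetical, trimmed stand-in for `esxcli nmp device list` output;
# on a real ESXi 4 host, capture it with: esxcli nmp device list
sample='t10.F405E46494C45400155716660743D2D6753583D203054496
   Device Display Name: Local VMFS volume
   Path Selection Policy: VMW_PSP_FIXED
t10.F405E46494C45400969407E61726D2A6457586D2633477E4
   Device Display Name: iSCSI VMFS volume
   Path Selection Policy: VMW_PSP_MRU'

# Device long names are the lines that start in column one.
devices=$(printf '%s\n' "$sample" | awk '/^[^ \t]/ {print $1}')
printf '%s\n' "$devices"
```

Each extracted name can then be fed to esxcli nmp device setpolicy, as shown next.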
The following lines would convert both entries to round robin for the LUNs in question:
esxcli nmp device setpolicy --device t10.F405E46494C45400155716660743D2D6753583D203054496 --psp VMW_PSP_RR
esxcli nmp device setpolicy --device t10.F405E46494C45400969407E61726D2A6457586D2633477E4 --psp VMW_PSP_RR
Once these commands are accepted, the configuration on the VMFS volumes has been changed to round robin. Figure 4 shows this configuration:
Round robin is arguably more appropriate as a standard setting for Fibre Channel storage with VMFS volumes; the iSCSI example above simply shows the command syntax. There are many other options with the esxcli command. You can, for example, set a threshold in bytes or I/O operations after which the storage driver moves to the next path. Check out the vSphere CLI reference document on the VMware website for more information.
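Putting those pieces together, the loop below would apply round robin to both example devices and then set a one-I/O switching threshold. The esxcli nmp roundrobin setconfig syntax is my reading of the vSphere 4 CLI and should be verified against the reference document before use; the run wrapper keeps this a dry run that only prints each command.

```shell
# Dry-run sketch: set each device to round robin, then lower the
# round-robin trigger to one I/O per path. Drop "run" to execute.
run() { printf '+ %s\n' "$*"; }    # print each command instead of executing it

for dev in \
  t10.F405E46494C45400155716660743D2D6753583D203054496 \
  t10.F405E46494C45400969407E61726D2A6457586D2633477E4
do
  run esxcli nmp device setpolicy --device "$dev" --psp VMW_PSP_RR
  # Assumed syntax for the I/O-operations threshold; check the vSphere
  # CLI reference before running this for real.
  run esxcli nmp roundrobin setconfig --device "$dev" --type iops --iops 1
done
```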
Enable iSCSI storage and scan for disk
We can configure the ESXi 4 host to enable iSCSI storage and scan for disk through the command line. This can be useful as part of a post-installation script, along with configuring network interfaces and virtual switches. The following commands will enable the iSCSI initiator and then perform a scan:
esxcfg-swiscsi -e
esxcfg-swiscsi -s
This process is shown on the console in Figure 5 below:
Once this command completes, the storage adapter is configured on the ESXi server.
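In a post-installation script, those two steps might look like the sketch below. The -q flag is assumed here to query whether the initiator is already enabled; verify it against the esxcfg-swiscsi usage text on your host. As before, the run wrapper only prints the commands.

```shell
# Dry-run sketch of the iSCSI step in a post-install script.
run() { printf '+ %s\n' "$*"; }    # print each command instead of executing it

run esxcfg-swiscsi -e    # enable the software iSCSI initiator
run esxcfg-swiscsi -s    # scan the initiator for new LUNs
# -q is assumed to query the initiator's enabled state; confirm
# on your host before relying on it.
run esxcfg-swiscsi -q
```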
vSphere and ESXi 3 are similar enough to feel familiar, yet many parts of the configuration differ and require some planning and testing before administrators are fully ready to move to the new platform.
Rick Vanover (MCTS, MCSA) is a systems administrator for Safelite AutoGlass in Columbus, Ohio. Vanover has more than 12 years of IT experience. His areas of interest include virtualization, Windows-based server administration and system hardware.
This was first published in August 2009