This two-part article series outlines a way to provide redundancy for a file server virtual machine without using a SAN for VMFS storage or VMotion. The first part covers the initial setup of the file server virtual machine. The second part covers a scripted synchronization of the source virtual machine (VM) to another ESX server for manual recovery. Depending on your service uptime requirements, a manual failover of a virtual machine without VMware HA may be acceptable. While these articles describe a scenario with a file server virtual machine, the same technique can be used with any server whose system drive changes infrequently.
The scenario discussed in these articles used the following components:
- VMware ESX Server 3.0.1 (hosted on two Dell PowerEdge 1850 servers with 60 GB of internal SCSI storage and an Intel Pro/1000MT Dual Port Server Adapter for iSCSI traffic)
- VMware Converter 3.0.1 Enterprise
- Nexsan Sataboy 4 TB usable RAID 5 storage array, firmware revision Bi52 (connected via iSCSI)
- Windows Server 2003 R2
- A Windows XP SP2 desktop machine for VI Client access and PowerShell or VBScript scripting
The Nexsan Sataboy had a single 4 TB RAID 5 array that was split into two 2 TB volumes, as shown in the images below.
Nexsan Sataboy Dashboard:
Nexsan Sataboy RAID Settings:
Nexsan Sataboy Volume Settings:
Also, the Nexsan Sataboy was configured to use iSCSI as the storage protocol.
Nexsan Sataboy iSCSI settings:
The file server was a Windows Server 2003 R2 virtual machine with a 15 GB virtual disk used as the system volume. The data drives for this file server were provided via an iSCSI connection to a Nexsan Sataboy using the Microsoft iSCSI initiator inside the guest file server virtual machine. This scenario works well for a small to medium environment. The network that the file server is on serves approximately 170 users. The storage device throughput, measured from within the guest virtual machine using IOMeter, was between 30 and 40 MB/s.
To isolate the iSCSI network traffic from the regular network traffic in the guest virtual machine, a new network was added to the VMware ESX server. The vSwitch in that network was tied to a port on the Intel Pro/1000MT Server Adapter NIC. This procedure is shown in the images below.
First, a new "Virtual Machine" network was added.
Next, a virtual switch was created that was attached to a port on the Intel Pro/1000MT Server Adapter NIC.
Finally, the network was given an appropriate name.
The final network setup can be seen in the image below.
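The same network can also be created from the ESX 3.x service console instead of the VI Client, using the esxcfg-vswitch utility. A minimal sketch follows; the vSwitch name, uplink device name, and port group label are assumptions for illustration, not values from the environment described above.

```shell
# Create a new virtual switch for iSCSI traffic
esxcfg-vswitch -a vSwitch1

# Link the vSwitch to a physical port on the Intel Pro/1000MT adapter.
# vmnic2 is an assumed device name; run "esxcfg-nics -l" to find yours.
esxcfg-vswitch -L vmnic2 vSwitch1

# Add a Virtual Machine port group with a descriptive label
esxcfg-vswitch -A "iSCSI Network" vSwitch1

# List the vSwitch configuration to verify the result
esxcfg-vswitch -l
```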
Once the new virtual network was set up, the guest file server virtual machine was attached to the Nexsan Sataboy volumes using version 2.05 of the Microsoft iSCSI initiator. A few additional steps were needed to make this file server production-ready after the iSCSI disks were connected. Sometimes, when the server is rebooted, the shares on the iSCSI-attached volumes are not recreated. This happens because the Server service starts before the iSCSI initiator service, so the Server service cannot see the disks that hold the shares. I had to make sure that all iSCSI connections were persistent, bind all permanent volumes (in this case, E: and F:), and make the Server service dependent on the iSCSI initiator service. These procedures are outlined in Microsoft KB article 870964. At this point, the file server was production-ready. In part two of this article, I will discuss how you can use p2vtool.exe, provided with VMware Converter 3.0.1 Enterprise, to script a backup of the main file server VM to another ESX server.
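The persistence and service-dependency steps above can be performed from a command prompt inside the guest using iscsicli.exe and sc.exe, in line with KB 870964. The sketch below assumes the defaults for all login parameters; the target IQN is a placeholder you would replace with the name your Nexsan Sataboy actually reports.

```shell
REM Log on to the target, then make the login persistent so the
REM session is re-established at boot. The IQN below is a placeholder;
REM the asterisks accept the default value for each login parameter.
iscsicli QLoginTarget iqn.1999-02.com.nexsan:sataboy:target0
iscsicli PersistentLoginTarget iqn.1999-02.com.nexsan:sataboy:target0 T * * * * * * * * * * * * * * * 0

REM Bind all currently mounted iSCSI volumes (E: and F: in this case)
REM so the initiator service waits for them at startup
iscsicli BindPersistentVolumes

REM Make the Server service depend on the iSCSI initiator service.
REM Note: the space after "depend=" is required by sc.exe, and this
REM command replaces any existing dependency list for the service.
sc config LanManServer depend= MSiSCSI
```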
About the author: Harley Stagner has a wide range of knowledge in many areas of the IT field, including network design and administration, scripting and troubleshooting. Of particular interest to Harley is virtualization technology. He was the technical editor for Chris Wolf and Erick M. Halter's book Virtualization: From Desktop to the Enterprise and currently writes his own blog at www.harleystagner.com.