Many successful virtualization administrators build a VMware lab at home -- but it's a bigger project than you might think. In this edition of Virtualization Viewpoints, you'll get a detailed outline of how to create a VMware lab that's capable of supporting vSphere's more advanced features -- and how to build it on a budget. This section covers the requirements for building a VMware lab at home.
Home labs are a great way for virtualization professionals to gain valuable, hands-on experience outside of the office. Once virtualization administrators are removed from fragile production environments, they can explore the technology and boost their skills.
A few months ago, I wrote an article about my quest to build a VMware lab at home using a desktop PC and VMware Workstation. The project was a great success, exceeding my expectations and allowing me to configure multiple ESX and ESXi hosts running as virtual machines (VMs) under VMware Workstation. The desktop PC that I used had the following specifications:
- An Intel i7-920 2.66 GHz, quad-core processor with hyperthreading;
- 12 GB of DDR3-1600 memory;
- a 1 TB, 7200 RPM SATA hard drive;
- Windows 7 Professional; and
- VMware Workstation 7.
When I wrote the article, I had not purchased a shared-storage device for my VMware lab. I suggested a few, including the Iomega ix2-200 and ix4-200d, as well as the Data Robotics DroboPro. After weighing a variety of factors -- such as cost, performance, size and features -- I chose the ix4-200d for several reasons:
- Cost. My budget was about $700. The DroboPro was not even close to that price range, and I was able to find an ix4-200d on sale at Fry's Electronics for $649.
- Performance. The ix4-200d supports RAID 5 and has more memory (512 MB) and a faster processor (1.2 GHz) than the ix2-200 (RAID 1, 1.0 GHz processor, 256 MB memory). The four-drive, RAID 5 configuration on the ix4-200d also provides more spindles to write to than the two-drive, RAID 1 configuration on the ix2-200.
- Size. The ix4-200d supports up to 8 TB, using four 2 TB drives; the ix2-200, on the other hand, supports up to 4 TB, using two 2 TB drives. I wanted at least 4 TB of raw capacity, and 1 TB drives are the most affordable. Factoring in the RAID penalty, the ix4-200d configured with four 1 TB drives provides just under 3 TB of usable disk space, whereas the ix2-200 with two 1 TB drives provides just under 1 TB of usable disk space.
- Features. The ix2-200 and ix4-200d use almost identical software and have the same features. For me, support for iSCSI and Network File System (NFS) clients was the most important feature -- which they both had.
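The usable-capacity figures above follow from simple RAID arithmetic. Here is a minimal sketch of that calculation -- the drive counts and RAID levels come from the article, but the helper function itself is my own illustration:

```python
def usable_tb(drive_tb, drive_count, raid_level):
    """Approximate usable capacity in TB for simple RAID levels.

    RAID 5 loses one drive's worth of space to parity;
    RAID 1 mirrors, so only half the raw capacity is usable.
    Real-world usable space is slightly lower after formatting overhead.
    """
    if raid_level == 5:
        return drive_tb * (drive_count - 1)
    if raid_level == 1:
        return drive_tb * drive_count / 2
    raise ValueError("unsupported RAID level")

# ix4-200d: four 1 TB drives in RAID 5 -> about 3 TB usable
print(usable_tb(1, 4, 5))   # 3
# ix2-200: two 1 TB drives in RAID 1 -> about 1 TB usable
print(usable_tb(1, 2, 1))   # 1.0
```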
I chose the ix4-200d over the ix2-200. It's a great little unit that's feature rich, provides good performance and is very affordable. After this purchase, my home lab was all set, and it has served me well. The i7-920 provides blazing performance -- with hyperthreading, its four cores present eight logical processors -- and it easily handles several VMs running at once.
Not too long after I bought the i7-920 processor, the i7-930 came down in price to about the same level as the i7-920. They are basically the same processor, but the i7-930 has a 2.80 GHz clock speed, compared with the i7-920's 2.66 GHz. If you are building a new system today, I recommend the i7-930.
Even though this type of VMware lab works well for many people, I wanted to use some of the advanced features that are not available when running ESX and ESXi as VMs. These include VMDirectPath, Fault Tolerance, Distributed Power Management, and Dynamic Voltage and Frequency Scaling.
With this in mind, I decided to expand my VMware lab at home with some servers that I could install ESX/ESXi on without running VMware Workstation. This setup requires designating specific servers as ESX/ESXi hosts, but it has some advantages and disadvantages when compared to using a single desktop computer and VMware Workstation.
VMware Workstation is a cheaper option because one desktop can run numerous ESX and/or ESXi hosts. VMware Workstation can also be used as a conventional workstation, and of course, the administration and management of multiple hosts is easier when they are contained on a single PC.
If I used Workstation instead of a dedicated server, however, I wouldn't be able to play around with VMware's higher-end features, such as VMDirectPath and Fault Tolerance. I'd also suffer a performance penalty because running ESX/ESXi in its bare-metal form provides much better performance.
Because bare-metal ESX/ESXi servers require certain hardware to operate properly, I had to figure out what my ESX/ESXi host needed so it performed the way I wanted it to. I looked for something that had the following specs:
- at least 8 GB of memory, expandable to at least 16 GB;
- an Intel-based, quad-core CPU that supported Fault Tolerance;
- a small local disk with at least 250 GB of space;
- at least three network interface cards (NICs) that were supported by vSphere; and
- a storage controller that was supported by vSphere.
I chose Intel because, as I mentioned earlier, I wanted to use VMDirectPath, which requires either Intel Virtualization Technology for Directed I/O (VT-d) or AMD I/O Memory Management Unit (IOMMU) technology. Intel VT-d has been out for a while and is common in many servers, but AMD's IOMMU is not widely available. I didn't need that much local disk space because I planned on using shared storage for the servers. A RAID controller is nice to have in a VMware lab, but wasn't a requirement for me.
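Before buying, it's worth verifying what a candidate CPU actually supports. On a Linux host, the CPU's virtualization extensions show up as flags in /proc/cpuinfo ("vmx" for Intel VT-x, "svm" for AMD-V); note that VT-d itself is a chipset/BIOS-level feature, typically visible as "DMAR" entries in the kernel log rather than as a CPU flag. Here is a small sketch that checks for a flag in /proc/cpuinfo-style text -- the sample string is made up for illustration:

```python
def has_cpu_flag(cpuinfo_text, flag):
    """Return True if the given flag (e.g. 'vmx' or 'svm') appears
    in a 'flags' line of /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The flags line looks like: 'flags : fpu vme ... vmx est'
            if flag in line.split(":", 1)[1].split():
                return True
    return False

# Hypothetical /proc/cpuinfo fragment (not from a real machine)
sample = "processor : 0\nflags : fpu vme de pse vmx est tm2 ssse3\n"
print(has_cpu_flag(sample, "vmx"))  # True
print(has_cpu_flag(sample, "svm"))  # False
```

On a real system you would read the file with `open("/proc/cpuinfo").read()`; remember that even with "vmx" present, VT-d must still be enabled in the BIOS for VMDirectPath to work.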
In the next section, I focus on how to choose the right hardware and the different options available. Stay tuned!
Eric Siebert is a 25-year IT veteran with experience in programming, networking, telecom and systems administration. He is a guru-status moderator on the VMware community VMTN forums and maintains VMware-land.com, a VI3 information site.