Just joining us? Check out the previous article in this series for a more detailed look at the requirements and considerations for building advanced VMware home labs.
After I established my home lab requirements (which you can read about in part one of this series), I had to figure out which hardware to use. I considered three hardware options:
- building white boxes, as I did with my desktop PC;
- using brand-name, pre-built desktop PCs; or
- using brand-name, pre-built servers.
I wanted to keep the total price of the server under $1,000, so cost was an important factor -- which is why I first looked into building a white box PC. With a white box, I could also choose my exact components, which are typically cheaper than brand-name parts. But the biggest factor ended up being feature support.
I decided to use dedicated servers because certain vSphere features, such as Fault Tolerance and VMDirectPath, have very specific hardware requirements. Only a very select group of processors, for example, supports Fault Tolerance. Also, figuring out whether white box hardware is compatible with these features can be difficult and frustrating. Ultimately, I ruled out white box PCs because the cost savings over the available brand-name hardware options weren't great enough.
Next, I looked at brand-name, desktop-class PCs from manufacturers, including Hewlett-Packard (HP), IBM and Dell. Determining whether those machines supported specific vSphere features was still challenging because VMware's Hardware Compatibility Guide (HCG) lists only servers and not desktop-class PCs.
Additionally, most desktops have features that you would never use in servers, such as high-end video cards, pre-installed Windows OSes, media card readers and DVD-RW drives. As a result, when you turn a PC into an ESX or ESXi host server, you pay for unnecessary components. Therefore, I eliminated desktop PCs from my list of virtual host candidates.
Choosing a server model
My last option was brand-name servers. Having mostly worked with HP servers, I started my search with HP's low-cost ML line of non-rackmount servers. The ML150 G6 was the cheapest server on the vSphere HCG, and it had an Intel Xeon 55xx series processor, which is compatible with Fault Tolerance. The ML150 is at the high end of the ML server line, and it starts at $799 for the basic model without additional memory or network interface cards (NICs).
Next, I reviewed the ML110 G6 servers, which start at $469 and come with a variety of processor options, such as the Intel i3-530, Intel i3-540, Xeon 3430 and Xeon 3440.
The Xeon 3400 processor series supports Fault Tolerance, but the i3 series does not. So my decision came down to the 3430 and 3440 processors. Both are quad-core processors, but the 3440 supports hyperthreading, so it will appear as eight logical CPUs to my host. Also, the 3440 was slightly faster at 2.53 GHz, compared to the 3430, which runs at 2.40 GHz.
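The hyperthreading difference can be sketched with simple arithmetic: the number of logical CPUs a host sees is the core count multiplied by the hardware threads per core. This is a toy calculation for illustration, not VMware tooling:

```python
# Toy arithmetic: logical CPUs an ESX/ESXi host sees per processor.
# Logical CPUs = physical cores x hardware threads per core.
def logical_cpus(cores: int, threads_per_core: int) -> int:
    return cores * threads_per_core

# Xeon 3430: quad-core, no hyperthreading
print(logical_cpus(4, 1))  # 4

# Xeon 3440: quad-core with hyperthreading (two threads per core)
print(logical_cpus(4, 2))  # 8
```

Twice the logical CPUs doesn't mean twice the throughput, of course, but it gives the hypervisor's scheduler more slots to place virtual CPUs on.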
The 3440 model was only $60 more, and it included a bigger hard drive (250 GB instead of the 160 GB hard drive on the 3430 model). As a result, I decided on the ML110 G6 server with the 3440 processor.
Then, I had to make sure that vSphere supported the embedded storage adapter and NIC. If vSphere does not have the proper driver, it won't see the storage or network.
The ML110 uses a B110i SATA RAID controller and an NC107i gigabit network adapter. Both were listed in the I/O compatibility guide, so I was OK there. One thing to note, however: While the B110i supports SATA RAID, it does not support Virtual Machine File System (VMFS) volumes. Therefore, you need to create a non-RAID volume or add a Serial-Attached SCSI (SAS) controller and drives, such as the HP Smart Array P212.
The ML110 G6 with the 3440 processor starts at $609, but I found it cheaper from a reseller for about $560.
Choosing additional hardware options
The base ML110 server included 2 GB of memory and only one NIC (there is an additional NIC, but it is dedicated to the integrated Lights-Out adapter). I wanted 8 GB of memory and three NICs, and I wanted to use HP-branded components.
The ML110 has four dual in-line memory module (DIMM) slots and a maximum RAM of 16 GB. It comes with one 2 GB DDR3 DIMM, so I would need three more 2 GB DIMMs to reach 8 GB. I considered using 4 GB DIMMs to get 16 GB of memory, but they were more than double the cost of the 2 GB DIMMs. Instead, I ordered three more 2 GB DDR3 PC3-10600 memory DIMMs (500670-B21) for about $72 each.
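The upgrade math works out roughly as follows. The $72 price for the 2 GB DIMM is the one quoted above; the 4 GB DIMM price is a hypothetical placeholder, chosen only to be consistent with "more than double the cost":

```python
# Cost sketch for the ML110 G6 memory upgrade (4 DIMM slots, 16 GB max).
DIMM_2GB_PRICE = 72    # USD, HP 2 GB DDR3 PC3-10600 (500670-B21), per article
DIMM_4GB_PRICE = 160   # USD, assumed placeholder ("more than double")

# Path 1: keep the included 2 GB DIMM and add three more -> 8 GB total.
cost_8gb = 3 * DIMM_2GB_PRICE

# Path 2: fill all four slots with 4 GB DIMMs -> 16 GB total.
cost_16gb = 4 * DIMM_4GB_PRICE

print(cost_8gb)   # 216
print(cost_16gb)  # 640
```

At those prices, doubling the memory roughly triples the memory spend, which is why I stopped at 8 GB.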
I still needed two more vSphere-compatible NICs. The ML110 has four expansion slots (three PCI Express and one PCI), so I started looking at dual-port NICs. I wanted PCI Express (PCIe) NICs, which are faster but cost more than PCI NICs. I found the HP NC360T dual-port gigabit PCIe NIC for about $225, which seemed expensive. So I shopped around.
I found the best deals on Intel dual-port NICs. Intel makes a wide variety of NICs, but its most popular gigabit NICs are the PRO/1000 series, which appear on the vSphere HCG. You can get the dual-port PRO/1000 in a PCIe or PCI-X format (PT model and MT model, respectively). You can plug the PCI-X adapters into the ML110's single PCI slot, but they would operate at a slower bus speed (e.g., 32-bit, 33 MHz). As a result, the MT model wouldn't perform as well as a network adapter in a PCIe slot would.
While the PCI-X and PCI formats are compatible with each other, you can only plug PCIe cards into PCIe slots. If you want to save money, however, the PRO/1000 MT model is about half the price of the PRO/1000 PT version. I opted for the PRO/1000 PT model because I didn't want to cut corners in this area.
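Rough bus arithmetic shows why a PCI-X card dropped into a plain PCI slot becomes the bottleneck for a dual-port gigabit NIC. These are back-of-the-envelope figures that ignore protocol overhead:

```python
# Back-of-the-envelope bus bandwidth comparison, ignoring overhead.

# A legacy 32-bit/33 MHz PCI slot is a shared bus:
pci_mbit = 32 * 33        # ~1,056 Mbit/s total for the whole bus

# A single PCIe 1.x lane carries ~2,000 Mbit/s per direction
# (2.5 GT/s signaling with 8b/10b encoding):
pcie_x1_mbit = 2000

# Two gigabit ports can each push 1,000 Mbit/s in one direction:
dual_gige_mbit = 2 * 1000

print(pci_mbit)                        # 1056
print(dual_gige_mbit > pci_mbit)       # True: PCI can't keep up
print(dual_gige_mbit <= pcie_x1_mbit)  # True: one PCIe lane can
```

In other words, two saturated gigabit ports want roughly twice what the old PCI bus can deliver, while even a single PCIe lane has the headroom, which is why I paid extra for the PT model.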
For me, the 250 GB hard drive was big enough, but HP doesn't offer bigger drives with its pre-configured, smart-buy servers. You can purchase up to three additional drives for the server. Given what HP charges -- $250 for a 250 GB SATA drive and $419 for a 750 GB SATA drive -- I would definitely buy additional drives from Micro Center, where you can get a 1 TB drive for about $99.
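Comparing cost per gigabyte makes the gap obvious, using the prices quoted above:

```python
# Cost per gigabyte for the drive options quoted in the article.
def per_gb(price_usd: float, size_gb: int) -> float:
    return round(price_usd / size_gb, 2)

print(per_gb(250, 250))   # 1.0  USD/GB -> HP 250 GB SATA
print(per_gb(419, 750))   # 0.56 USD/GB -> HP 750 GB SATA
print(per_gb(99, 1000))   # 0.1  USD/GB -> Micro Center 1 TB
```

Retail drives work out to roughly a tenth of HP's per-gigabyte price, though you give up HP's drive sled, firmware matching and warranty coverage.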
In the final installment of this series, I focus on the building costs for VMware home labs. Stay tuned!
Eric Siebert is a 25-year IT veteran with experience in programming, networking, telecom and systems administration. He is a guru-status moderator on the VMware community VMTN forums and maintains VMware-land.com, a VI3 information site.