Successful VMware administrators generally create home labs, but it's a bigger project than you might think.
In this edition of Virtualization Viewpoints, expert Eric Siebert details the cost of building his virtual test lab, which is capable of running vSphere's more advanced features.
Check out the previous articles of this series for a more detailed look at the requirements and considerations for building advanced VMware home labs, as well as how to choose the hardware for a VMware home lab.
Once I had determined my VMware home lab's requirements and chosen the hardware, the hardest part of the journey was over.
In the end, I ordered two ML110 G6 servers, six 2 GB HP DIMMs and two Intel PRO/1000 PT adapters.
| Qty | Item | Unit price | Total |
| --- | --- | --- | --- |
| 2 | HP S-Buy ML110 G6 X3430 NHP SATA Server (57-8929-005) | $560 | $1,120 |
| 6 | 2 GB DDR3 PC3-10600 DIMMs (500670-B21) | $72 | $432 |
| 2 | Intel PRO/1000 PT dual-port gigabit adapter | $120 | $240 |
| | **Total** | | **$1,792** |
So far, I am really pleased with the HP ML110 G6 servers. They support the advanced features that I wanted, including Dynamic Voltage and Frequency Scaling power management. They are small (Micro ATX tower cases), very quiet, and draw very little power (95W CPU, 300W power supply). In addition, they have a dedicated LO100i management adapter, which isn't as robust as those on more expensive models but still provides the basic functionality.
You can connect to the LO100i with a Web browser to manage the server power, hardware and event logs. For additional functionality, you have to purchase the advanced license, which adds a remote virtual KVM (keyboard, video and mouse) console and virtual media. The advanced license, however, retails for $229 -- which may be a high price to pay for convenience.
The ML110 G6 also has internal and external USB ports for booting from an ESXi flash drive, a trusted platform module security chip slot, and a DVD-RW drive. I easily installed vSphere from a CD, and verified that the hardware was recognized and the advanced vSphere features worked OK.
In the BIOS, I needed to enable support for VMDirectPath (VT-d) because it's disabled by default. The server fully supports Intelligent Platform Management Interface, as you can see from the Health Sensors report in the vSphere Client.
While playing with ESX and ESXi running on virtual machines (VMs) is cool, nothing beats running them on bare-metal server hardware, and the ML110 G6 provides that experience at a very reasonable cost.
A word on networking: I already had a Linksys E2000 wireless N router that has four gigabit wired ports, but I wanted to add a gigabit-managed switch to connect everything together. I looked at low-cost, managed switches and found the Linksys (now Cisco branded) SLM2008 eight-port gigabit switch from the small business line. It was relatively inexpensive (about $110) and supported some advanced networking features that I could use with vSphere, such as jumbo frames and 802.1Q virtual local area network tagging.
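To take advantage of the switch's 802.1Q support, the matching VLAN ID also has to be set on the ESX side. A minimal sketch from the ESX service console (or the vSphere CLI) is below; the VLAN ID and port group name are examples, not values from my setup.

```shell
# Tag the "VM Network" port group on vSwitch0 with VLAN 10
# (the SLM2008 port it uplinks to must carry VLAN 10 as a trunk)
esxcfg-vswitch -v 10 -p "VM Network" vSwitch0

# Confirm the VLAN ID took effect
esxcfg-vswitch -l
```

The same result can be achieved in the vSphere Client under the host's Networking properties; the CLI is just handy when configuring two hosts identically.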
The eight ports could handle all the NICs (eight total, including the LO100i) in my two ML110 G6 servers. While it met my immediate needs, it didn't leave much room for additional network devices (e.g., shared storage). One way around this, however, is to directly connect some of the NICs to each other, instead of connecting them to the switch.
I could create a vSwitch on each host for FT logging and VMkernel traffic, for example, and simply use a standard network cable to connect one host's NIC directly to the other's. A crossover cable is not needed because most NICs detect this automatically and adjust on their own. For networking cables, there is no better place than MonoPrice.com, where you can buy 14 ft. Cat. 6 network cables for less than $3.
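For the directly connected NICs, the back-to-back FT logging network might be set up like this from the ESX service console; the vSwitch number, port group name, NIC name and IP addresses are all example values I've chosen for illustration.

```shell
# On host 1: dedicate a vSwitch to the direct-connect NIC
esxcfg-vswitch -a vSwitch2               # create a new vSwitch
esxcfg-vswitch -L vmnic2 vSwitch2        # link the back-to-back NIC as its uplink
esxcfg-vswitch -A FT-Logging vSwitch2    # add a port group for FT logging traffic

# Give the VMkernel an interface on that port group (private example subnet)
esxcfg-vmknic -a -i 192.168.10.1 -n 255.255.255.0 FT-Logging

# Repeat on host 2 with a different address, e.g. 192.168.10.2
```

FT logging itself still has to be enabled on the VMkernel NIC's properties in the vSphere Client; the commands above only build the underlying network.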
In the immediate future, I want to add another shared storage device to give me more options with my vSphere lab. The Iomega storage device has been great, but I want to add a different storage device for some diversity. The Drobo models were too expensive for me, because iSCSI/Network File System (NFS) storage support is limited to the DroboPro.
The new Drobo FS model is more affordable but doesn't have native NFS/iSCSI support. There is an add-on NFS DroboApp available for it, however. The Netgear ReadyNAS units look nice, and I was seriously considering buying one, until a company I had not previously heard of caught my eye.
Synology makes a whole line of network storage devices -- from small, one-drive units to four-drive units that can be expanded. All of Synology's units are rich in features and support both iSCSI and NFS protocols. I was very impressed with the product line and will order the DS410 model soon. Many of the models are diskless, so you can add whatever size/speed drives that you'd like.
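Once an NFS-capable NAS like the Synology is on the network, presenting one of its exports to a host as a datastore is a one-liner. A sketch, assuming an export path and IP address I've made up for the example:

```shell
# Mount an NFS export from the NAS as a datastore
# (192.168.1.50 and /volume1/vmware are example values)
esxcfg-nas -a -o 192.168.1.50 -s /volume1/vmware NFS-Synology

# List NFS mounts to confirm the datastore is available
esxcfg-nas -l
```

iSCSI takes a few more steps (enabling the software initiator and adding the target), so NFS is often the quicker way to get shared storage online in a lab.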
I'll also order another eight-port SLM2008 switch, or buy a 16-port switch, for future expansion. I might also add another SATA local drive in case I want to do more locally.
You can buy a 500 GB hard drive for around $50, so it's not expensive to get more local storage. If 4 GB DIMMs come down in price, I may upgrade the RAM to the maximum of 16 GB, but I doubt this will happen anytime soon. Finally, I may add a third server for more options when using features, such as FT, DRS and High Availability.
I plan on using my new ML110 G6 servers in combination with ESX/ESXi hosts running as virtual machines on VMware Workstation. It's really easy to create hosts on Workstation, and when I want to play around, I can spin up a VM quickly and delete it when I'm done. Having bare-metal servers running ESX and ESXi alongside virtualized hosts on Workstation gives you lots of flexibility.
Once you begin setting up your own home lab, you'll likely find it's addictive, and you'll want to continue developing it much further than you initially imagined. My journey hasn't ended yet, although this phase of it has.
Eric Siebert is a 25-year IT veteran with experience in programming, networking, telecom and systems administration. He is a guru-status moderator on the VMware community VMTN forums and maintains VMware-land.com, a VI3 information site.