
Reboot that vSphere design to maximize its potential

For a VMware shop that has been using vSphere for several years, a few key areas may benefit from an overhaul that saves the IT staff unnecessary administrative tasks.

The idea of a vSphere design modernization is something that I came up with in 2013 as a way to describe the common tasks I was performing for many of my customers, such as updating or verifying VMware designs or recommending a new design for a greenfield project.

Many of my customers either started or dramatically expanded their VMware footprint in the ESX 3.5 and vSphere 4.x days. When these environments were built, they followed the vSphere design practices of that era. Over time, these customers have done in-place upgrades of hosts or built new hosts to follow the same designs. But they haven't revised their vSphere designs to reflect current practices or to take advantage of newer features.

To elaborate on some of the architecture items that have aged, I will walk through a few of them in networking, clusters and storage -- the major architectural parts of a vSphere environment. By considering modern design choices, administrators may find they improve how the environment operates while also reducing the effort needed to support the infrastructure.

Networking

Virtual networking is a much-neglected piece of many older vSphere designs. Admins typically carry forward whatever was done around networking in the original design. The two major parts here are the type of virtual switch used and the physical uplink design.

With the virtual switch design, I see a large number of customers who continue to use the vSphere Standard Switch (VSS). This is usually not due to a licensing restriction, but to a comfort level or old habits. With each release of vSphere, the vSphere Distributed Switch (VDS) has made tremendous gains with enhanced features. I feel there are not many -- if any -- valid reasons not to use a VDS in most designs today.
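For those who want to see how much of the environment still leans on the VSS, a quick inventory is a good starting point. The sketch below uses pyVmomi, the VMware Python SDK, to list the standard and distributed switches on each host; the vCenter address and credentials are placeholders, and the unverified SSL context is for lab use only.

```python
# Sketch: audit standard vs. distributed switches per host with pyVmomi.
# The vCenter address and credentials below are placeholder assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    if not host.config:           # skip disconnected hosts
        continue
    vss = [s.name for s in (host.config.network.vswitch or [])]        # standard switches
    vds = [p.dvsName for p in (host.config.network.proxySwitch or [])]  # distributed switches
    print(f"{host.name}: VSS={vss or 'none'} VDS={vds or 'none'}")

view.DestroyView()
Disconnect(si)
```

Any host that still reports a VSS alongside (or instead of) a VDS is a candidate for migration the next time you touch its networking.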

Another thing I have seen is that customers will adopt the VDS -- but then treat it like a VSS. An example of this is that they use a VDS, but each cluster has its own VDS or multiple VDSes. They might have a management VDS, a VDS for virtual machines and one for storage. This goes against the idea of the VDS by using multiple switches when a single VDS could accommodate the requirements and reduce complexity.

Some of these older configurations are also due to the way network uplinks were configured for vSphere hosts. In the earlier VMware days, 1 GbE networking was the fastest option and you would commonly see hosts with two, four or more 1 GbE connections. An administrator would group these connections into pairs and use them to separate different types of traffic. These uplink pairs would be matched to virtual switches -- this is where the multiswitch designs originated. I have seen environments -- even after moving to 10 GbE networking -- still using multiple uplinks solely for the purpose of traffic separation. Even if they are using only 1% of the uplink capacity, they still continue to rely on the legacy architecture.

In a modern vSphere design, I prefer to see a customer use a single VDS with a pair of 10 GbE uplink connections. This configuration controls different traffic types through the load-balancing options in the virtual switch. We also have Network I/O Control, or NIOC, available on the VDS as another way to control and limit different traffic types. These collapsed designs are easier to support and are more than capable of meeting the needs of the majority of environments.
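To verify that a collapsed design is actually in place, you can check the uplink count and NIOC state on each VDS. The pyVmomi sketch below assumes placeholder vCenter credentials; it reports what it finds and turns NIOC on where it is off.

```python
# Sketch: confirm each VDS has the expected uplink count and that NIOC is enabled.
# vCenter details are placeholder assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
for dvs in view.view:
    uplinks = dvs.config.uplinkPortPolicy.uplinkPortName  # e.g. ['Uplink 1', 'Uplink 2']
    nioc = dvs.config.networkResourceManagementEnabled
    print(f"{dvs.name}: {len(uplinks)} uplinks, NIOC enabled: {nioc}")
    if not nioc:
        dvs.EnableNetworkResourceManagement(enable=True)  # switch NIOC on

view.DestroyView()
Disconnect(si)
```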

Clusters

The clustering part of vSphere has also matured greatly since the days of 3.5 and 4.x. Long gone are the HA design limitations of those early clusters. In addition, CPU scheduling and resource management have improved over time, which should have resulted in higher consolidation ratios for many enterprises.

Unfortunately, the truth is there are still a lot of clusters in the world that are running at 20% to 30% capacity. By not properly managing performance or seeking a higher consolidation ratio, I see clusters with more hosts than are required. I think there is a large opportunity for customers to address this issue, and this could result in needing fewer hosts and savings in licensing costs.
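A quick way to gauge whether a cluster falls into this category is to pull the VM-to-host ratio and rough utilization numbers from vCenter. The pyVmomi sketch below does that per cluster; the credentials are placeholders, and the percentages are point-in-time quick stats rather than a substitute for proper capacity planning.

```python
# Sketch: report VM-to-host consolidation ratio and rough CPU/memory utilization
# per cluster, to spot clusters running well below capacity.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    hosts = [h for h in cluster.host if h.summary.hardware]  # skip disconnected hosts
    if not hosts:
        continue
    vm_count = sum(len(h.vm) for h in hosts)
    cpu_cap = sum(h.summary.hardware.cpuMhz * h.summary.hardware.numCpuCores for h in hosts)
    cpu_used = sum(h.summary.quickStats.overallCpuUsage for h in hosts)      # MHz
    mem_cap = sum(h.summary.hardware.memorySize for h in hosts) / 1024**2    # MB
    mem_used = sum(h.summary.quickStats.overallMemoryUsage for h in hosts)   # MB
    print(f"{cluster.name}: {len(hosts)} hosts, {vm_count} VMs "
          f"({vm_count / len(hosts):.1f} VMs/host), "
          f"CPU {100 * cpu_used / cpu_cap:.0f}%, memory {100 * mem_used / mem_cap:.0f}%")

view.DestroyView()
Disconnect(si)
```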

While there are certainly some design requirements that warrant the use of separate clusters for workloads, I see too much use of separate clusters for applications or projects. Some of this might be due to the way businesses fund projects and purchase hardware, but there are greater savings to be realized with larger vSphere clusters.

I'm not saying everyone should only build huge clusters, but rather than building several three- or four-node clusters, an administrator should consider eight- or 16-node clusters. How would this affect your ability to manage resources? You will also gain savings by eliminating clusters you don't need.

There will be different constraints that may change this in some cases. Things like database licensing or the requirement for a DMZ cluster may drive the need for a few smaller clusters. But let these be the exception rather than the norm.

Storage

Much like the previous topics, data-store sizing is a common item that does not get altered much as many configurations are leftovers from the past. In the early days of vSphere, you would see smaller data stores that might be for a single VM or a handful of VMs. This was due to older array technology and some data-store size limits from that time period. These restrictions have been gone for several years now, and the ability to have fewer, larger data stores should be the norm.

There can be some design requirements that drive the data-store sizes for certain VMs or clusters. Some factors, such as how groups of VMs are replicated or specific performance needs, can drive exceptions for a subset of VMs rather than the entire environment. For the majority of your VMs, an updated data-store size starting at 2 TB is a much friendlier story. With data stores now able to reach 64 TB, you can find a happy medium between being too conservative and going to the maximum size.

Just ensure that you are meeting your capacity and performance requirements. It will be much easier to monitor and maintain a handful of data stores than a large number of smaller ones that are all nearly full due to their size.
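As a starting point for that kind of review, the pyVmomi sketch below lists every data store by size and flags those under 2 TB or with less than 10% free space. Those thresholds, along with the vCenter credentials, are assumptions you should adjust to your own requirements.

```python
# Sketch: list data stores sorted by capacity and flag small or nearly full ones.
# Thresholds (2 TB, 10% free) and vCenter details are placeholder assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in sorted(view.view, key=lambda d: d.summary.capacity):
    if not ds.summary.capacity:   # skip inaccessible data stores
        continue
    cap_tb = ds.summary.capacity / 1024**4
    free_pct = 100 * ds.summary.freeSpace / ds.summary.capacity
    flags = []
    if cap_tb < 2:
        flags.append("smaller than 2 TB")
    if free_pct < 10:
        flags.append("less than 10% free")
    print(f"{ds.summary.name}: {cap_tb:.1f} TB, {free_pct:.0f}% free"
          + (f"  <-- {', '.join(flags)}" if flags else ""))

view.DestroyView()
Disconnect(si)
```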

