Systems administrators who are installing VMware vSphere for the first time need to understand there are pros and cons for every change to the default settings. VMware offers many customization options for the configuration, which can be very enticing for new users. But one expert suggests practicing restraint rather than experimenting with various options. Sometimes keeping it simple is the best option, as author Matt Liebowitz explains in his book, VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads.
Liebowitz co-wrote the book with Christopher Kusek and Rynardt Spies to explain how improper installation of vSphere could be the root of performance problems.
How do you avoid overcomplicating your vSphere configuration? And if you do, what's the best way to fix it? Liebowitz talked with SearchVMware to provide some guidance.
What are some signs that someone has made their vSphere infrastructure too complex? What are the first steps they should take to correct it?
Matt Liebowitz: Chances are if someone can't provide a valid business or technical justification for a design element in their infrastructure, then they may be introducing unnecessary complexity. That isn't always true, of course, but generally speaking, the simpler the solution the easier it is to support going forward.
Folks looking to reduce complexity should take a step back and fully understand the requirements of what they're trying to design. Once they map those requirements back to the actual design, they can start to see where there may be unnecessarily complex elements that could be either scaled down or removed completely.
What are some common mistakes users make that overcomplicate the design?
Liebowitz: I think the most common mistake is when IT tries to design something -- be it vSphere, a storage solution, anything -- without fully understanding the requirements of the business. What applications will this vSphere platform support? What are the high availability (HA) and disaster recovery (DR) requirements of those applications, and subsequently, of the vSphere environment? If IT doesn't know the answers to those questions and designs the solution in a vacuum, they are likely to miss something and either overcomplicate the solution or, potentially worse, fail to meet all of the requirements of the business.
Having a good methodology for design can help someone looking to design a vSphere environment. Figure out what the requirements are, both technical and business, and compare them with the organization's current capabilities. The gap between what you can deliver today and what the requirements of the business are for the future should dictate what elements go into the design.
What should be the main focus when establishing a baseline?
Liebowitz: Ideally, you'll have all resource elements included in your baseline: CPU, memory, network and storage. Different systems/applications will have different resource requirements, so having a good baseline of all resource elements will help you establish predictable behavior in your environment.
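A baseline like the one Liebowitz describes can be as simple as a mean and standard deviation per resource metric, against which new readings are compared. The sketch below is illustrative only; the sample figures are made up, and in practice the raw data would come from your monitoring tooling (vCenter statistics, esxtop exports, and so on).

```python
import math
from statistics import mean, stdev

def build_baseline(samples):
    """Compute a simple mean/stdev baseline per resource metric.

    samples: dict mapping a metric name (cpu, memory, network, storage)
    to a list of observed utilization percentages.
    """
    return {
        metric: {"mean": mean(values), "stdev": stdev(values)}
        for metric, values in samples.items()
    }

def is_anomalous(baseline, metric, value, tolerance=2.0):
    """Flag a reading more than `tolerance` standard deviations from the mean."""
    b = baseline[metric]
    return abs(value - b["mean"]) > tolerance * b["stdev"]

# Illustrative samples, e.g. hourly utilization averages pulled from monitoring.
samples = {
    "cpu":     [35, 40, 38, 42, 37, 41],
    "memory":  [60, 62, 61, 63, 59, 60],
    "network": [10, 12, 11, 13, 10, 12],
    "storage": [25, 27, 26, 28, 24, 26],
}
baseline = build_baseline(samples)
print(is_anomalous(baseline, "cpu", 75))  # a 75% spike stands out -> True
print(is_anomalous(baseline, "cpu", 39))  # within normal range -> False
```

Covering all four resource types in one baseline is what makes behavior predictable: a workload that looks healthy on CPU alone may be deviating badly on storage latency or network throughput.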
What are some possible performance hindrances if you choose a server outside of the current CPU family?
Liebowitz: In the old days, before VMware Enhanced vMotion Compatibility (EVC), choosing a CPU outside of the current CPU family meant creating isolated clusters in order to support vMotion. Today, that is largely a thing of the past, so mixing servers with different CPU families within the same vSphere cluster is easy to do. That doesn't mean it carries no performance risk, however.
Let's say we have a vSphere cluster that uses a combination of older Intel CPUs and brand new Intel CPUs. Now we'll say that cluster is supporting a virtualized Microsoft Exchange 2013 environment. When you design Exchange, you calculate the megacycles required to support a specific number of users, and the megacycles are based on the specific processor type in the host. If you have a cluster with mixed CPU families, when that Exchange mailbox server moves from host to host, the difference in processor families will result in a difference in performance. That can make designing a virtualized Exchange environment a challenge. In that situation, designing for the lowest common denominator (in this case, the oldest Intel CPU in the cluster) will give you the most predictable performance as the Exchange VM moves around the cluster.
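The arithmetic behind that sizing exercise can be sketched as follows. This is not Microsoft's official sizing formula -- the function name, utilization target and all figures here are illustrative assumptions -- but it shows why the same user load lands on different core counts depending on which CPU family the VM runs on, and why sizing to the oldest CPU is the safe choice.

```python
import math

def required_cores(users, megacycles_per_user, host_megacycles_per_core,
                   target_utilization=0.80):
    """Estimate CPU cores needed to serve a given number of mailbox users.

    megacycles_per_user: per-user CPU demand (taken from sizing guidance).
    host_megacycles_per_core: effective capacity of one core on this
    particular host's CPU family.
    target_utilization: headroom factor so cores aren't sized to 100%.
    """
    demand = users * megacycles_per_user
    capacity_per_core = host_megacycles_per_core * target_utilization
    return math.ceil(demand / capacity_per_core)

# Illustrative numbers only -- consult vendor sizing guidance for real values.
old_cpu = required_cores(2000, 3.0, 2000)  # older CPU: fewer megacycles/core
new_cpu = required_cores(2000, 3.0, 3300)  # newer CPU: more headroom
print(old_cpu, new_cpu)  # 4 cores on the old family, 3 on the new
```

Sizing to the older family (4 cores in this made-up example) means the VM still performs predictably after vMotion moves it to any host in the mixed cluster.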
If users are trying to keep their design simple, should they lean towards scaling up or scaling out? How can admins future-proof their data center?
Liebowitz: I've been in consulting for a long time, so I'm going to hide behind my favorite answer: It depends. It really does all come down to your requirements on whether it makes sense to scale up or scale out. My personal opinion leans more towards scaling out rather than scaling up. I like the idea of adding capacity as needed as it helps you control your investment in new hardware and software and lets you grow into your infrastructure.
The one constant in technology is change, so there really is no way to future-proof a data center in my mind. My advice would be to not fear change, but rather embrace it and implement new technologies that help you solve your business and technical challenges at the right time. You don't always have to be the first to adopt new technologies, but you shouldn't be the last, either.