Despite its prevalence in the data center, virtualization can still be a dirty word in some environments. Many times, those of us in IT are told "The application can't run virtually" or "My application needs dedicated resources; I want a physical server."
Here's the reality: If an application can run on x86, it can run virtually. You can even run at a 1:1 consolidation ratio if the virtual machine (VM) needs that much memory and CPU, and still get the added benefits of virtualization.
Vendors need a virtual shakeup
We live in a world where some vendors need to catch up. They give physical server requirements for an application, which can differ greatly from its needs when run inside a VM. The major gap is not knowing what the application actually needs in a specific environment. When given generic requirements of 16 GB of RAM and 32 CPUs, internal application owners will insist on those specifications.
As the infrastructure administrator, you can build that VM and show that the application is not using all of those resources. In fact, the extra CPUs are hurting the VM. How can that be? More is better, right? Even as hypervisors improve, they still have to schedule vCPUs onto physical cores, and a higher vCPU count means more CPUs must be scheduled each time the VM runs. If the application only needs 12 CPUs, it will run much better and have less chance of experiencing high CPU ready time. CPU ready is the time a VM spends waiting for physical CPU to become available; anything above roughly 5% will usually impact performance.
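To see where that 5% figure comes from in practice: vCenter's real-time performance charts report CPU ready as a summation in milliseconds over a 20-second sample interval, and converting that raw value into a percentage is simple arithmetic. Here is a minimal sketch of that conversion; the sample value of 1,500 ms is hypothetical.

```python
def cpu_ready_percent(ready_summation_ms: float, interval_s: float = 20.0) -> float:
    """Convert a CPU ready summation value (milliseconds) into a percentage.

    vCenter's real-time charts sample every 20 seconds, so the percentage
    is the ready time divided by the length of the interval in ms.
    """
    return (ready_summation_ms / (interval_s * 1000.0)) * 100.0

# A VM that accumulated 1,500 ms of ready time in one 20-second window
# (hypothetical value) is waiting 7.5% of the time -- above the ~5% mark:
print(cpu_ready_percent(1500))  # 7.5
```

A VM at or under 1,000 ms per 20-second interval (5%) is generally in the safe zone; sustained values above that are a sign to remove vCPUs rather than add them.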
Don't let past issues slow progress
There may be some resistance to virtualizing applications due to a previous poor experience. I'm sure there are some lingering unpleasant memories associated with virtualizing databases in the early days. There were a lot of complaints about slowness and contention back on vSphere 4.0, but VMware has come a long way with vSphere 5.5 and now with version 6. I still remember being in awe when VMware showed 1 million IOPS on vSphere 5.1 at VMworld 2011.
In vSphere 6, VMs can be configured with up to 128 vCPUs, 4 TB of RAM and 62 TB virtual disks. That's quite a beast. I have found that if best practices are followed for the application or database, the overhead of virtualization is insignificant. Problems arise from memory and CPU overcommitment, oversized VMs and bad queries. Throwing more hardware at an inefficient application or a bad query is the wrong fix.
Ease their fears with proof
How do you convince your internal customer they can run an application virtually? Start in test or development, then mimic production requirements and let them test. Set a baseline of performance expectations for queue length, memory usage, pages/sec and so on, using Performance Monitor for a SQL Server database or a tool like Iometer for storage metrics. It is easier to add vCPUs and RAM later than to take them away. Right-sizing not only makes the VM efficient but also saves money by letting you deploy more VMs on the same hardware. Virtual doesn't mean free.
Speed and flexibility are the payoffs
The same issue has been holding back VDI. It was hot a few years ago, but many deployments failed when organizations assumed VDI would save money. Performance was poor because VDI was often run on slower-tier storage, and the end-user experience suffered. Those pains still linger, so many organizations continue to avoid it.
Yes, you initially have to invest in the infrastructure, but the premise of VDI is to save time on management. Tasks such as deploying and patching are much faster. Sensitive information can be stored and handled at the server level rather than on someone's desktop or laptop. A basic VM can be spun up for contractor use in a matter of seconds. Vendors in this area, especially storage appliance vendors, have made great gains in performance and consolidation using deduplication, compression and so on. You can easily deploy a virtual desktop infrastructure using VMware's Virtual SAN (VSAN).
There is a VMware View and VSAN white paper covering sizing best practices, as well as a reference architecture document to guide you. They describe using VMware View Planner so an engineer can simulate VDI workloads, which are typically CPU-bound and sensitive to I/O.
Embrace the virtualization challenge. SAP HANA workloads can run virtually; so can Hadoop. A successful virtualization effort comes down to VM sizing and proper configuration: keep VMs sized within a single NUMA node so they access local memory, and make sure storage blocks are aligned. There is virtually no excuse now. We have the tools to show how well an application can run. The VMware community is large and helpful; if you can't find documentation or a blog post on your issue, reach out on Twitter or the VMware community forums. Someone is always happy to assist in your virtual journey.
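The NUMA sizing rule above is easy to sanity-check with arithmetic. This sketch assumes one NUMA node per physical socket with memory split evenly between nodes, which is common but not universal; the host figures are hypothetical, so verify against your actual hardware topology.

```python
def fits_one_numa_node(vm_vcpus: int, vm_mem_gb: int,
                       cores_per_socket: int, mem_per_node_gb: int) -> bool:
    """Check whether a VM's vCPU count and memory fit within one NUMA node.

    Assumes one NUMA node per socket; a VM that exceeds either limit
    becomes a "wide" VM and will access remote memory across nodes.
    """
    return vm_vcpus <= cores_per_socket and vm_mem_gb <= mem_per_node_gb

# Hypothetical host: 2 sockets x 12 cores, 128 GB of memory per node.
print(fits_one_numa_node(12, 96, 12, 128))  # True  -- stays on one node
print(fits_one_numa_node(16, 96, 12, 128))  # False -- 16 vCPUs span nodes
```

Keeping the 12-vCPU VM from the earlier example within a single 12-core socket is exactly the kind of right-sizing decision this check supports.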