How to avoid issues when virtualizing business-critical applications

Virtualizing a business-critical application can resolve stability issues, but a lack of preparation before the transition can hamper the process.


If you follow best practices when virtualizing business-critical applications, pin down the basics and have the right people handling the process, the transition can be relatively uneventful. But virtualization brings its own set of concerns that an enterprise should be aware of before deciding to change how an important application operates.

In this Q&A, we talked with Michael Webster (VCDX-066 and vExpert), who has more than 10 years' experience working with VMware virtualization products and leads the Business Critical Applications Practice Team in VMware's Asia Pacific and Japan Center of Excellence. Webster also handles project management, operational readiness and technical architecture consulting as owner of IT Solutions 2000 in New Zealand. Webster discussed some challenges and use cases for virtualizing mission-critical apps.

When virtualizing a business-critical app, what is still missing?

Michael Webster: Location awareness and application license awareness being built into the platform. Right now, the most complicated part of virtualizing business-critical apps has nothing to do with technology. It's all about the licensing policies. Location awareness comes into play when you want to provision or operate systems active-active over geographic distance and provide disaster avoidance capability, as well as with software licensing.
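To make the licensing constraint concrete: with per-host or per-socket application licensing, a VM must only ever be placed on, or restarted on, hosts that the license covers, and location awareness adds a similar rule for sites. The sketch below is a hypothetical illustration of the kind of license- and location-aware placement check Webster describes as missing; none of these names correspond to a real hypervisor API.

```python
# Hypothetical sketch of a license- and location-aware placement check.
# All names are illustrative; they model the constraint Webster says is
# still missing from virtualization platforms, not a real vendor API.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    site: str          # e.g., "datacenter-a"
    licensed_for: set  # application licenses this host is covered by

@dataclass
class VM:
    name: str
    required_license: str  # e.g., "oracle-db-ee"
    allowed_sites: set     # sites where the app may legally/safely run

def valid_targets(vm: VM, hosts: list) -> list:
    """Return only hosts that satisfy both the licensing and location rules."""
    return [
        h for h in hosts
        if vm.required_license in h.licensed_for
        and h.site in vm.allowed_sites
    ]

hosts = [
    Host("esx01", "datacenter-a", {"oracle-db-ee"}),
    Host("esx02", "datacenter-b", set()),
]
vm = VM("finance-db", "oracle-db-ee", {"datacenter-a"})
print([h.name for h in valid_targets(vm, hosts)])  # ['esx01']
```

With a check like this built into provisioning and failover, an active-active deployment over geographic distance could avoid silently moving a workload onto unlicensed hardware.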

Can you give an example of a specific project where IT was able to recognize real benefits in performance, uptime, etc., by virtualizing a tier-one app?

Webster: There are literally hundreds of examples, but if I had to pick one, it would be a system I helped a customer virtualize that handles $50 billion worth of financial transactions per year. Downtime on this system could cost upward of $100 million per day.

This was a Unix-to-VMware migration project that was initiated due to an outage caused by the source Unix platform and application, which resulted in a significant financial loss. The customer described the go-live of the virtualization project as the quietest and smoothest deployment they'd ever experienced.

[The] performance of the system increased by a factor of five to 20 times, costs were significantly reduced, and we were able to improve availability much more simply and at a far lower cost than on the original platform.

What are some pitfalls and roadblocks you can run into when undertaking a project like that?

Webster: The biggest mistake I see is when customers try to virtualize business-critical apps [without] doing thorough enough baselining, planning or testing to ensure they can achieve the results they require.
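Baselining can start with very simple tooling. The sketch below samples host CPU and memory utilization and summarizes the distribution; it assumes the third-party psutil package, and a real pre-migration baseline would also capture storage and network I/O over full business cycles, not 60 samples.

```python
# Minimal baselining sketch: sample host CPU and memory utilization and
# summarize the distribution. Assumes the third-party psutil package
# (pip install psutil); a production baseline would run far longer and
# cover peak periods such as month-end processing.

import statistics
import psutil

SAMPLES = 60          # number of samples to collect
INTERVAL_SECONDS = 1  # each cpu_percent() call blocks for this long

cpu, mem = [], []
for _ in range(SAMPLES):
    cpu.append(psutil.cpu_percent(interval=INTERVAL_SECONDS))
    mem.append(psutil.virtual_memory().percent)

def summarize(name, data):
    p95 = statistics.quantiles(data, n=20)[18]  # 95th percentile
    print(f"{name}: avg={statistics.mean(data):.1f}% "
          f"p95={p95:.1f}% max={max(data):.1f}%")

summarize("cpu", cpu)
summarize("mem", mem)
```

Percentiles matter more than averages here: sizing the target virtual environment to the average rather than the 95th percentile is a common way to miss the results the business requires.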

[Another issue] is not starting from a point where the vendor best practices have been applied. Most performance and availability problems and project failures are caused by very simple things that could have easily been prevented, such as reading best practice [documentation] for an application and applying the recommended settings. Most of the time, the information is freely available.

Not having the right people and the right processes is also a common mistake. Virtualization isn't black magic; it requires sufficient hardware, configured appropriately to meet the business requirements. Not knowing or not documenting the business requirements, both functional and nonfunctional, and then failing to verify and validate them, [are] really big [mistakes].
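One way to keep nonfunctional requirements verifiable, rather than leaving them in a document no one re-reads, is to record them as data and check measured test results against them automatically. The requirement IDs, limits and measured values below are purely illustrative.

```python
# Hypothetical sketch: nonfunctional requirements recorded as data and
# verified against measured results from pre-go-live testing. All
# requirement names, limits and measurements are illustrative.

REQUIREMENTS = {
    # requirement id: (description, limit, unit)
    "NFR-01": ("95th percentile transaction latency", 200.0, "ms"),
    "NFR-02": ("Failover time after host failure", 120.0, "s"),
}

def verify(measured: dict) -> bool:
    """Compare measured test results against the documented targets."""
    all_passed = True
    for req_id, (desc, limit, unit) in REQUIREMENTS.items():
        value = measured[req_id]
        passed = value <= limit
        all_passed = all_passed and passed
        status = "PASS" if passed else "FAIL"
        print(f"{req_id} {status}: {desc} = {value}{unit} (limit {limit}{unit})")
    return all_passed

# Values that would come from load and failover testing:
print(verify({"NFR-01": 180.0, "NFR-02": 95.0}))
```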

There can also be resistance from application teams and different parts of the organization. To combat that, it is generally best to make them an integral part of the project team, involved in the design, deployment, testing and go-live processes. They need to be on board to make the project a success.

Has the trend of multihypervisor environments, or hypervisor tiering, affected how you approach app virtualization?

Webster: I haven't seen any impact at all from multihypervisor strategies or environments.

Due to the costs, complexity and inefficiencies of managing multiple hypervisors, customers I deal with are still choosing to have a single hypervisor vendor.

Those customers that have used multiple hypervisors -- perhaps one for Linux and a different one for Windows -- are choosing to standardize on a single hypervisor. They realize the costs, complexity and lack of manageability simply outweigh any possible perceived benefits.

Also, if customers choose one hypervisor for dev/test and another for production, they immediately introduce risk and invalidate their testing. The impact of defects that slip through invalid testing could far outweigh any possible cost savings. Most customers won't chase a minor cost saving if it adds risk and the potential for much greater business impact.
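The dev/test-versus-production risk Webster raises is, at bottom, an environment-parity problem. Below is a minimal sketch of a parity check, assuming each environment's configuration can be exported as a simple key-value manifest; the manifests and keys shown are illustrative.

```python
# Hypothetical parity check: if dev/test and production diverge (different
# hypervisor, version or settings), test results stop being valid. The
# manifests here are illustrative dicts; in practice they would be
# exported from each environment's inventory tooling.

DEV_TEST = {"hypervisor": "vendor-a", "version": "5.1", "vcpu_per_vm": 4}
PRODUCTION = {"hypervisor": "vendor-b", "version": "3.2", "vcpu_per_vm": 4}

def parity_report(dev: dict, prod: dict) -> list:
    """Return every key where the two environments diverge."""
    return [
        (key, dev.get(key), prod.get(key))
        for key in sorted(set(dev) | set(prod))
        if dev.get(key) != prod.get(key)
    ]

for key, dev_val, prod_val in parity_report(DEV_TEST, PRODUCTION):
    print(f"MISMATCH {key}: dev/test={dev_val} production={prod_val}")
```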


This was first published in August 2013
