Although 80%-95% of organizations are using server virtualization today, either in production or in the planning stage, the industry average for the percentage of servers virtualized is only 15%-20%. Everybody's virtualizing, but most have only completed phase I – the low-hanging fruit. For most, this first phase has been about consolidating the easy-to-virtualize servers, such as Web servers and file and print servers. Servers running custom applications are also high on the list, since they are supported in-house and thus carry no vendor support or licensing issues.
In the early days of virtualization, most organizations stayed away from virtualizing database servers, largely due to input/output (I/O) and performance concerns. These days, more users report success in this area for a variety of reasons, which I'll discuss later. Still, as organizations move on to virtualizing the next wave of servers and applications, there are a number of common barriers they face.
Lack of budget
In our recent Focus Virtualization Survey (Virtualization Management: User Survey Report) of over 250 companies, the number one barrier to expanding server virtualization to include more applications and servers is budget. This is interesting, since this same set of users reported that virtualization had, in fact, reduced the total cost of ownership (TCO) of their servers and increased the return on investment (ROI). But even with good financial results, budget has been an inhibitor to expansion.
This is an area where getting the right help to build a solid business case can make a dramatic difference. There are good TCO and ROI tools available, as well as experienced people who can help. Talk with your virtualization vendors, server hardware vendors and resellers about getting help building a business case for phase II. Most of them have both the tools and the resources to do the cost justification.
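To make the business-case idea concrete, here is a back-of-envelope consolidation savings calculation of the kind those TCO/ROI tools automate. Every figure below (hardware, power, admin and licensing costs, consolidation ratio) is an illustrative assumption, not data from the survey – substitute your own numbers.

```python
# Back-of-envelope TCO comparison for a phase II business case.
# All dollar figures and the consolidation ratio are illustrative
# placeholders -- substitute your own environment's numbers.

def annual_server_cost(num_servers, hw_amortized=2000, power_cooling=800,
                       admin=1500, licensing=700):
    """Rough annual cost for a fleet of physical servers."""
    return num_servers * (hw_amortized + power_cooling + admin + licensing)

def consolidation_savings(physical_servers, consolidation_ratio,
                          per_host_virt_license=3000):
    """Annual savings from consolidating N physical servers onto fewer hosts."""
    hosts_needed = -(-physical_servers // consolidation_ratio)  # ceiling division
    before = annual_server_cost(physical_servers)
    after = annual_server_cost(hosts_needed) + hosts_needed * per_host_virt_license
    return before - after

# 100 physical servers consolidated 10:1 onto 10 virtualization hosts:
print(consolidation_savings(100, consolidation_ratio=10))  # -> 420000
```

Even with hypervisor licensing added back in, the consolidation math usually dominates, which is why the same users who cite budget as a barrier also report improved TCO.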
Virtualization support from the application vendor
The next most common barrier listed by users was that an application or vendor doesn't support virtualization. Despite the now widespread use of virtualization and the growth to the next tier of applications, there are still a surprising number of vendors who do not officially support their applications running as virtual machines. In many cases, users are virtualizing these applications anyway.
Our advice here is twofold: First, keep pressuring your vendors to get with the program (only customer pressure will make these vendors come around). Second, if you decide to virtualize anyway, make sure you have good virtual-to-physical (V2P) conversion tools at the ready. Everyone has experience using physical-to-virtual (P2V) conversion tools to convert servers and apps from physical to virtual, but people often forget about needing to go back from virtual to physical. If you do have a problem with a vendor that doesn't support its app in a virtual environment, you'll need to V2P the app and recreate the problem before the vendor will work it.
Performance concerns

Performance concerns came next, with 30% reporting them as a barrier. This generally takes several forms, particularly around database and I/O issues. The good news is that as virtualization platforms have matured, I/O overhead has dropped significantly. In addition, new chipset enhancements work with the new software releases to lower the overhead even more. Specifically, improvements in vSphere with DirectPath directly address this issue, and VMware is reporting dramatic I/O performance improvements. These improvements vary by application and should be tested in your own environment, but with the architectural change in the way I/O is handled, there can be huge gains in performance. This can also mean big increases in consolidation ratios, which, by the way, will make your business case stronger, both for upgrading to vSphere and for expanding to the next wave of applications.
Next tier of applications are harder to virtualize
The next highest response is somewhat of a catch-all for a variety of issues. Sometimes these are technical; sometimes they are more people-related. One common complication comes from multi-tier applications that involve multiple servers and services working together. This adds complexity to managing these applications and requires more sophisticated processes and products. In vSphere, vApps address this by grouping virtual machines (VMs) together and managing them as a single entity.
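The grouping idea can be sketched in a few lines. This is a conceptual illustration of vApp-style management – an application's VMs treated as one unit with an explicit startup order (database before app tier, app tier before web tier) – not the actual vSphere API; all class and VM names are hypothetical.

```python
# Conceptual sketch of vApp-style grouping: a multi-tier application's VMs
# managed as one entity with an explicit power-on order. Illustrative only;
# this is not the vSphere API.

class AppGroup:
    def __init__(self, name):
        self.name = name
        self.members = []  # list of (start_order, vm_name)

    def add_vm(self, vm_name, start_order):
        self.members.append((start_order, vm_name))

    def power_on_sequence(self):
        """VM names in the order they should start (dependencies first)."""
        return [vm for _, vm in sorted(self.members)]

    def power_off_sequence(self):
        """Shut down in reverse dependency order."""
        return list(reversed(self.power_on_sequence()))

crm = AppGroup("crm-app")
crm.add_vm("crm-db", start_order=1)          # database tier first
crm.add_vm("crm-app-server", start_order=2)  # then the application tier
crm.add_vm("crm-web", start_order=3)         # web front end last
print(crm.power_on_sequence())
```

The point is operational: clone, power-cycle or migrate the group as one object, instead of hand-sequencing individual VMs every time.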
Harder to virtualize can also mean resistance from the line of business owners. Some of these folks raise concerns about performance and troubleshooting if their apps are virtualized or have concerns about service-level agreements (SLAs). The good news here is that there are new tools and new features which have been added to existing tools to help with managing these harder-to-virtualize applications. These tools are coming from hypervisor vendors (like AppSpeed from VMware), startups (like Vizioncore, Akorri, Netuitive and Embotics), and from enterprise systems management vendors becoming virtualization-aware (BMC, CA and EMC).
Storage issues

The last of the top five barriers is, not surprisingly, storage. This is consistent with the top two pain points of virtualization implementation, which are backup and storage (by a wide margin). Server virtualization has a major effect on both, and unfortunately, many organizations have implemented virtualization without the full involvement of the storage team. Virtualization and consolidation change the game for backup, since multiple virtual servers share the same physical server and I/O bandwidth. Still, most shops (if they actually back up all their virtual servers, which most don't) continue to perform backups the way they always have, with a backup agent in each VM.
There are clearly better ways, but the server virtualization staff and the storage staff need to work together, look at the latest tools (check out the new vSphere vStorage API and see which backup products leverage all of its features), and create a backup strategy that will scale as you expand to the next wave of servers and apps.
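A quick sanity check shows why per-VM agents stop scaling after consolidation: every agent on a host shares the same physical link, so the backup window is bounded by total data over shared bandwidth no matter how many agents run in parallel. All figures below (VM count, data sizes, link speed) are illustrative assumptions.

```python
# Why per-VM backup agents collide on a consolidated host: aggregate demand
# is capped by the host's shared link, regardless of agent count.
# All inputs are illustrative assumptions.

def backup_window_hours(vms_per_host, data_per_vm_gb, host_bandwidth_mbps):
    """Hours to back up every VM on one host when agents share one pipe.

    Running more agents concurrently does not help: the shared link caps
    aggregate throughput, so the window is total data / link speed
    (ignoring queuing and contention, which only make it worse).
    """
    total_gb = vms_per_host * data_per_vm_gb
    gb_per_hour = host_bandwidth_mbps / 8 / 1000 * 3600  # Mbps -> GB/hour
    return total_gb / gb_per_hour

# 15 VMs x 100 GB each over a single 1 Gbps link:
print(round(backup_window_hours(15, 100, 1000), 1))  # -> 3.3
```

On a standalone physical server, that same 100 GB backed up in minutes; consolidation multiplies the data behind one pipe, which is exactly why host-level and API-based backup approaches matter.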
In addition, since VMotion/live migration requires some form of shared storage across hosts, leveraging the advanced management capabilities that depend on live migration requires a shift from direct-attached storage (DAS) to network-attached storage (NAS) or storage area networks (SANs). Without proper planning, assessments, architectures and tools (like thin provisioning and deduplication), this can drive up costs (moving from less expensive storage to the most expensive tier-1 storage) and create performance bottlenecks and security challenges if not properly implemented. Again, success moving forward requires collaboration between the storage and server teams, leveraging the expertise of both.
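The thin provisioning point can be made with simple arithmetic: thick provisioning pays tier-1 prices for every allocated terabyte, while thin provisioning pays (roughly) only for space actually written. The capacity, utilization and price figures below are hypothetical.

```python
# Illustrative effect of thin provisioning on shared-storage cost when moving
# from DAS to SAN for live migration. Prices and utilization are assumptions.

def san_cost(allocated_tb, utilization, cost_per_tb, thin=False):
    """Cost of SAN capacity.

    Thick provisioning buys every allocated TB up front; thin provisioning
    buys (approximately) only the space actually consumed, so purchased
    capacity scales with utilization.
    """
    purchased_tb = allocated_tb * utilization if thin else allocated_tb
    return purchased_tb * cost_per_tb

# 50 TB allocated to VMs, but only 40% actually written, at $5,000/TB:
thick = san_cost(50, utilization=0.4, cost_per_tb=5000)            # -> 250000
thin = san_cost(50, utilization=0.4, cost_per_tb=5000, thin=True)  # -> 100000
print(thick, thin)
```

At typical utilization rates, that difference is often what keeps the DAS-to-SAN move from wrecking the phase II business case – which is why these tools belong in the planning, not as an afterthought.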
The good news for organizations facing these issues is that there is a whole ecosystem of vendors and solutions working to address many of these challenges. Take the time to ensure that you have the right products, processes and policies in place before expanding to phase II and beyond.
Barb Goldworm is president and chief analyst of Focus. Barb has spent over 30 years in technical, marketing, sales, senior management and industry analyst roles with IBM, StorageTek, Novell, EMA and multiple successful startups. Barb is Virtualization Chair for Interop, Blade Systems Insight and DataCenter Insights, and serves on several advisory boards on virtualization and cloud computing. She has authored hundreds of articles, business and technical white papers and research studies, in addition to her book "Blade Servers and Virtualization."