What server-savvy VMware admins need to know about virtualized storage

With vSphere 5.1, VMware made the tool truly storage-protocol agnostic, but that's not enough for virtualization 2.0. Storage is still a bottleneck.

Server virtualization is an established way of life for VMware administrators, so the fall meeting of the New England VMware User Group stepped outside members' comfort zones with two keynote presentations on storage.

Storage is increasingly a focal point in VMware's vSphere toolset, according to NEVMUG attendee David Hughes, a senior storage engineer for Commonwealth Financial Services. Fittingly, the event opened with a discussion of storage in vSphere 5. Other discussions covered a vision for the future data center, with virtualized storage and applications that pick and choose resources without roadblocks. Here are some notable comments that came out of NEVMUG.

When planning a storage architecture, think about what you're going to be doing -- i.e., changing or developing applications

VMware Virtual Machine File System (VMFS) 5 can support data stores of up to 64 terabytes. This vSphere version levels the playing field between storage architectures -- block-based VMFS over Fibre Channel, and file-based alternatives -- in terms of protocol support. VMware also freed up usable storage capacity with vSphere 5's Storage Appliance (VSA) by allowing RAID 5 and RAID 6.
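To see why RAID 5 and RAID 6 support frees up capacity, compare the usable fraction of raw disk space under each level. This is a back-of-the-envelope sketch, not a calculation from the article; the disk count and disk size below are hypothetical:

```python
def usable_fraction(level, disks):
    """Fraction of raw capacity left for data under common RAID levels."""
    if level == "RAID10":
        return 0.5                      # every block is mirrored
    if level == "RAID5":
        return (disks - 1) / disks      # one disk's worth of parity
    if level == "RAID6":
        return (disks - 2) / disks      # two disks' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

# Eight hypothetical 2 TB disks: 16 TB raw in each case.
for level in ("RAID10", "RAID5", "RAID6"):
    usable_tb = 8 * 2 * usable_fraction(level, disks=8)
    print(f"{level}: {usable_tb:.1f} TB usable of 16 TB raw")
```

On an eight-disk group, mirroring leaves half the raw capacity usable, while RAID 5 gives back all but one disk's worth, which is the capacity gain the VSA change delivers.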

Erin Banks, an EMC senior vSpecialist presenting "Storage Best Practices," pointed out that Fibre Channel and NFS are the most-used storage protocols, according to multiple surveys, but the majority of survey respondents use multiple storage protocols in their VMware environment. VMware added NFS support to Storage I/O Control (SIOC) in vSphere 5.

VMware has been pooh-poohing iSCSI for years, and that seems to have changed now

As EMC's Banks described, VMware is making a push to support all storage protocols in vSphere 5, covering the basic essentials so that applications can run agnostic of storage protocol. One audience member noted VMware's previous reticence toward certain storage systems, and EMC's presenters did not refute the point.

The goal is to have a bevy of resources and capabilities available and allow architects to make storage protocol choices based on the finer points of application/protocol compatibility. VMware administrators and storage administrators should also have equal visibility into the infrastructure, and speak the same language with vSphere's new storage application programming interfaces (APIs).

Why wouldn't you have the storage system do a lot of the work for virtualization? SIOC is like network throttling for your storage environment

SIOC works with individual VMs, kicking in when a data store exceeds predefined latency thresholds. As Jason Marques, EMC senior vSpecialist and co-presenter, suggested, SIOC throttles I/O queuing to the data store, like a VM-level version of network throttling. SIOC disengages once the data store returns to normal latency.

With SIOC, storage administrators can use vSphere 5 to prioritize I/O, and throttle I/O and bandwidth when they deem it necessary. For example, a company can execute high-priority workloads without interference from "noisy" operations (I/O-heavy but non-critical workloads).

Vendors will tell you to do an entire stack with them, but the truth is no vendor knows what's going on through the whole stack either

Fixing problems is harder with virtualization because of unstable I/O patterns, disparate interdependencies with low visibility, and mobility. Virtualization creates a massively transient world, and whether or not your data center is full of the same vendor's logo, you'll find problem solving harder than it was with a purely physical data center, said Steve Duplessie, founder and senior analyst at Enterprise Strategy Group (ESG), an IT analysis firm in Milford, Mass.

Some of these problems will change in virtualization 2.0, which was the focus of Duplessie's presentation, "The Storage Landscape & Virtualization -- Hype and Reality." Virtualization 1.0 was about making one physical box support six complete VMs; virtualization 2.0 is about making six boxes look like one to an application. Storage virtualization will eliminate physical dependencies that cause problems, such as one storage array being overworked while the one next to it sits idle, Duplessie explained. Grid computing would allow apps to view the whole data center as available resources from which to pick and choose.

Storage is moving backward

After considering the mobility and flexibility of processing power in virtualized servers, Duplessie lamented the choke points of storage: 1,000 servers may be networked to a storage resource through merely two access controllers. Duplessie sees this setup in 90% of virtualized infrastructures.

Virtualizing the storage controller will eliminate this choke point, and some vendors are catching on to the possibility. Virtual storage controllers are emerging at the storage layer, but why not at other parts of the stack? Duplessie makes a case for storage controllers at the virtual machine disk file (VMDK) level.

In the future, the data center will be one container, with an "über orchestration layer," as Duplessie described it. The question is how to accomplish this -- making everything in the data center look like one thing. VMware administrators need to consider the best and least painful ways to reach this next phase of virtualization, which may include transitioning from an internal data center to cloud computing -- or something in between.
