
Virtual servers gain from NFS, virtual NAS clusters

To compensate for planned downtime or peak CPU requirements, instances can move from one physical host server to another with minimal or zero downtime. Such flexibility, however, requires that physical machines can see all virtual disk images, which often leads to an open storage network that uses the Network File System (NFS) and virtualized network-attached storage (NAS) clusters.

In the case of traditional block-based storage such as iSCSI and Fibre Channel (FC) storage area networks (SANs), this means being able to assign and manipulate your logical unit numbers (LUNs) so that each LUN can be rapidly reassigned to another physical machine when moving a virtual machine to another physical host. This can be a difficult task to perform not only during initial deployment but also as the environment grows. Assigning a LUN per virtual machine and then exposing that LUN properly to the other physical hosts quickly becomes a scaling issue for IT staff.

In more and more environments, IT administrators are using larger LUNs with multiple virtual machine instances on them. While this eases some of the burden of exposing multiple LUNs to multiple machines, it does not solve the issues of partitioning, LUN growth and, especially, serving storage from more than one server host at the same time.

The NFS solution to scaling up
VMware now supports deploying virtual machines via an NFS boot. Deploying the virtual environment via bootable NFS mounts is an ideal response to this problem of scaling and is becoming more accepted.

NFS is a client/server system that allows users to access files across a network and treat them as if they resided in a local directory. This is accomplished through the processes of exporting (the process by which an NFS server provides remote clients with access to its files) and mounting (the process by which file systems are made available to the operating system and the user). It is primarily used in Unix-to-Unix file-sharing scenarios. Even if all your VMs are Windows based, NFS is still an option for you. While it is true that Windows can't boot from NFS, VMware built NFS into its disk virtualization layer so that Windows does not have to.
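For illustration, the two operations might look like the following on a generic Linux NFS server and client (the host name, directory paths, subnet and export options here are only placeholders, not a recommended configuration):

    # On the NFS server: export a directory to the 192.168.10.0/24 subnet
    # by adding an entry to /etc/exports, then re-read the exports table
    echo '/exports/vmstore 192.168.10.0/24(rw,sync,no_root_squash)' >> /etc/exports
    exportfs -ra

    # On a client: mount the export so it appears as a local directory
    mkdir -p /mnt/vmstore
    mount -t nfs nfs-server:/exports/vmstore /mnt/vmstore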

NFS-booted systems are easy to create and manage. NFS is by definition open: all physical servers have visibility to all virtual disk images, and capabilities such as VMotion are significantly easier to undertake. Instead of creating one LUN per VMware virtual disk (VMDK) as with an iSCSI or FC SAN, with NFS you can co-locate multiple VMDK files on a single NFS volume. This is possible because VMDKs are files, not actual disks.
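As a rough sketch of how that looks in practice on an ESX 3.x host (the server name, export path and datastore label below are placeholders), the shared NFS volume can be attached as a datastore from the service console, and every host that points at the same export sees the same set of VMDK files:

    # Attach the shared NFS export as a datastore on each ESX host
    esxcfg-nas -a -o nfs-server -s /exports/vmstore vm_datastore01

    # List the configured NAS datastores to confirm the mount
    esxcfg-nas -l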

Why use NFS?
NFS makes life much easier for the storage and VMware administrator, and in many VMware environments there is little, if any, performance penalty. With the exception of storage manufacturers that provide virtualized solutions, LUN management is a challenge for the VMware/storage administrator. With an NFS implementation, interacting with a single file system makes provisioning additional VMware images easier.

Access control is enabled through the built-in NFS security, allowing an NFS file system to be provisioned to a group of VMware managers. With NFS, there is no need to micromanage each LUN. For example, VMware images can be grouped in folders by application type and can be provisioned concurrently to a set of applications.
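As a hypothetical example of that built-in security (host names and paths are placeholders), a single export rule on the NFS server can grant a defined set of ESX hosts access to the entire volume, with no per-LUN masking to maintain:

    # /etc/exports entry: one rule covers every VMDK folder on the volume,
    # so adding new images requires no additional LUN or masking work
    /exports/vmstore  esx01.example.com(rw,sync,no_root_squash) esx02.example.com(rw,sync,no_root_squash)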

Also, the access path is now over traditional Ethernet, which not only drives cost down but also makes troubleshooting easier, since most organizations have deeper knowledge of IP management than they have of FC.

One advantage with NFS is simplicity of access. All the ESX servers can get to the single mount point, making the use of VMotion substantially easier. In FC deployments, each ESX server has to see every other ESX server's LUNs. This can be very difficult to configure and manage. NFS is a sharing technology, so all of this shared access is built in.

Another big gain is in data protection. While VMware images served up through NFS cannot use VMware Consolidated Backup (VCB), they can be mounted on a Unix or Linux host for backup, or backed up with a backup application that supports NDMP. With the Linux host method, simply mounting the NFS volume provides access to the VMware images; from there you can mount snapshots and back up the volume. In addition, you can leverage the native replication tools of the NFS host to provide business continuity and disaster recovery for your VMware environment, instead of purchasing VMware-specific replication tools.
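A minimal sketch of the Linux-host approach, assuming placeholder host names, paths and backup target, is to mount the same volume read-only on a backup server and copy the images off with a standard tool (ideally from a snapshot of the volume so the images are consistent):

    # Mount the NFS volume the ESX hosts use, read-only, on a backup server
    mkdir -p /mnt/vmstore-backup
    mount -t nfs -o ro nfs-server:/exports/vmstore /mnt/vmstore-backup

    # Copy the VMDK files to the backup target
    rsync -a /mnt/vmstore-backup/ /backup/vmware-images/

    umount /mnt/vmstore-backup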

To be clear, NFS is not the only protocol, and there are some cases where it is not a good fit. Microsoft Cluster Services must have block access, for example, and there are some cases where the raw performance of Fibre Channel is required. iSCSI also has a few unique abilities, one being the option to assign a LUN directly to a guest OS rather than going through the VMware disk virtualization layer, which makes it possible to move specific LUNs quickly out of the VMware environment.

This implementation requires more than just a standard file server or even a standard NAS because, in addition to hosting user data, it will now be running a critical part of the infrastructure.

Virtualized NAS clusters solve I/O problems
By virtualizing NAS clusters, problems associated with physical storage, such as I/O limitations, can be alleviated.

The limitation with traditional NAS appliances is that they are unable to scale effectively with increasingly demanding workloads. Deploying multiple physical servers, each serving up double-digit counts of VMware virtual machines (VMs), can quickly saturate the I/O bandwidth of the attached storage. These workloads are significantly more demanding than those seen in most file-server environments, and to compensate you have to deploy more NAS appliances, which results in NAS sprawl.

This forces multiple NAS systems to address the varying I/O demands of the virtual server environment while placing additional demands on those systems to handle the file-services requirement. With individual NAS heads in place, VMotion becomes difficult if not impossible; the only other option is to keep purchasing larger single NAS heads. In a VMware environment, such an upgrade is driven by the need to deliver additional performance to the attached ESX servers, not by the capacity constraints of the original NAS.

Enter virtualized NAS clusters. A virtualized NAS cluster presents a single NAS target to the entire ESX environment, even though that single target may be multiple NAS heads. A virtual NAS cluster is a grid of NAS nodes that are managed as a single entity, so scaling for performance or capacity becomes an independent decision. Scaling for additional I/O performance is merely a matter of attaching another node to the cluster, while scaling capacity is simply a matter of attaching more disk.

A virtualized NAS cluster also provides a new level of redundancy to the environment. If any node in the cluster fails, the file systems that were assigned to that node automatically fail over to other nodes in the cluster. This ability to provide nonstop data access is critical in a virtualized server environment, where a single failure could affect dozens of virtual machines. Having multiple levels of redundancy is critical for these environments.

Global File System movement
Movement of a virtual server from one physical machine to another can be very compelling and brings much of the data center flexibility that customers are looking for. Movement of the associated virtual disk, however, especially from one array to another or from one NAS head to another, is, while not impossible, a time-consuming and service-interrupting task.

In a virtualized NAS cluster environment, this movement is simple and nondisruptive, which further enhances the flexibility of the virtualized environment. For example, if a particular physical machine has several VMs with a peak need for more I/O bandwidth, other virtual machine disk images can be moved off the node, providing what is essentially private access to that node for that period of time. The same capability is also available to standard file systems hosted on the virtual NAS cluster, as they can be reallocated as needed.

Virtual NAS and FC: Best of both worlds
According to a recent white paper from VMware, FC-based block I/O is still the raw I/O performance leader. While some NAS suppliers will argue with those results, it doesn't hurt to have the best of both.

First, use FC when you have to, but do so sparingly. This capability comes in two different types of products available in the market today. There are NAS suppliers such as Network Appliance that provide FC and iSCSI services from their NAS heads; the heads essentially create an encapsulated FC LUN on their NAS file system. Then there are the gateway solutions, such as those from EMC and OnStor, which allow for native FC access to the storage systems. In the case of EMC, this is of course a gateway into a CLARiiON array. OnStor allows you to add a virtualized NAS cluster with a global file system to your existing storage via its NAS gateway.

The combination of NFS, clustered NAS and global file system movement in an environment that is not locked into one access protocol brings tremendous flexibility to the virtualized infrastructure, further improving its ability to scale and respond to the needs of the business while at the same time making the IT administrator's life a whole lot easier.

About the author: George Crump, founder of Storage Switzerland, is an independent storage consultant with over 20 years of experience.
 

This was first published in August 2008
