When VMware announced Virtual SAN (VSAN) in 2013, the market was just warming up to the whole software-defined concept and the convergence of compute and storage.
VMware already had the compute market cornered with vSphere and decided to expand its portfolio into the storage arena with the vSphere Storage Appliance (VSA). VSA sold so well packaged with vSphere Essentials that VMware decided to evolve the concept into an enterprise-class offering, one that could turn the local storage in vSphere hosts into highly available shared storage.
The goal of VMware VSAN was to bring the storage closer to the compute so that the I/O latencies commonly associated with networked storage -- such as Fibre Channel, Internet Small Computer System Interface, Network File System and Common Internet File System -- could be eliminated. VMware also wanted to do away with the controller or appliance virtual machine (VM) that managed storage in the VSA and in competing products, and so designed VSAN to integrate tightly with the VMware ESXi kernel and be managed via vCenter Server.
Fast forward two years: VMware has come a long way since VSAN was in public beta; in fact, VSAN 1.0 only went GA in March of 2014. With revenue from vSphere leveling off and projected to drop over the next few years, one might assume that VMware has chosen to concentrate its efforts on VSAN and NSX in the hope that these products will fill the revenue gap. Those assumptions have turned out to be accurate. The amount of effort VMware has put into research and development for both products became apparent through announcements at VMworld 2015.
Virtual SAN 6.0
In February of 2015, VMware announced that Virtual SAN 6.0 would introduce two deployment models -- hybrid and all-flash. The original mode of operating where the solid-state drive (SSD) is used for write buffering (30%) and read caching (70%), with magnetic hard disk drives used for capacity, is now known as the hybrid model. With the all-flash model, SSDs can now be used to store persistent data, allowing users to deploy a very cheap "all-flash-array," giving them consistently high I/O operations per second (IOPS) with submillisecond latencies. In all-flash mode, the cache tier is 100% write, rendering read caching unnecessary.
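As a rough illustration of the hybrid-model split described above, the sketch below (hypothetical code, not anything VMware ships) computes how a disk group's flash device would be divided between write buffer and read cache:

```python
# Illustrative only: in VSAN's hybrid mode, the flash device in each
# disk group is split 30% write buffer / 70% read cache.

def hybrid_cache_split(ssd_gb):
    """Return the write-buffer and read-cache portions of an SSD, in GB."""
    return {
        "write_buffer_gb": round(ssd_gb * 0.30, 1),
        "read_cache_gb": round(ssd_gb * 0.70, 1),
    }

# For example, a 400 GB SSD in a hybrid disk group:
print(hybrid_cache_split(400))  # {'write_buffer_gb': 120.0, 'read_cache_gb': 280.0}
```

In the all-flash model this split disappears: the cache tier is 100% write buffer, and reads are served from the flash capacity tier directly.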
With the increase in scalability -- up to 64 nodes -- and in IOPS with the all-flash model, VSAN 6.0 was deemed ready for Tier 1 and business-critical applications. In addition, JBOD support meant that certain blade systems with direct-attached storage could become VSAN nodes.
VMware also introduced the concept of fault domains, allowing users to define failure domains by grouping together multiple hosts within a cluster. This new feature improves resiliency and helps protect against specific failure scenarios that might be highly disruptive, such as a rack, network or power failure. As one can imagine, this announcement generated quite a few discussions concerning whether one could create "site" fault domains in order to create a VSAN that spans two sites. Unfortunately, at the time of the announcement, VSAN did not tolerate the higher latencies typical of links between VSAN nodes at separate sites, but the issue has since been remedied in VSAN 6.1.
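To make the fault-domain idea concrete, here is a conceptual sketch (hypothetical code, not VSAN's actual placement algorithm): replicas of an object are spread so that no two copies land in the same fault domain, which is what lets a whole rack fail without data loss.

```python
# Conceptual sketch, not VSAN's implementation: place object replicas
# so that no two copies of the data land in the same fault domain.

def place_replicas(fault_domains, copies):
    """fault_domains maps a domain name (e.g. a rack) to its member hosts.
    Returns one (domain, host) pair per copy, each in a distinct domain."""
    if copies > len(fault_domains):
        raise ValueError("need at least one fault domain per copy")
    placement = []
    for domain, hosts in fault_domains.items():
        if len(placement) == copies:
            break
        placement.append((domain, hosts[0]))  # any host in the domain will do
    return placement

# Three racks, each defined as a fault domain (names are hypothetical):
domains = {"rack-a": ["esx01", "esx02"],
           "rack-b": ["esx03", "esx04"],
           "rack-c": ["esx05", "esx06"]}
print(place_replicas(domains, 2))  # two copies end up in two different racks
```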
Other improvements included a new file system: the original format, a modified version of the VM file system, was replaced with one based on the Virsto file system and optimized for VSAN, improving performance and scalability and making the snapshot/clone functionality more efficient.
VSAN 6.1 -- What's New?
At VMworld in August 2015, VMware announced another update to VSAN, a mere six months after its previous update.
VSAN 6.1 boasts a number of exciting new features meant to entice customers away from traditional storage arrays. The first of these features, VSAN Stretched Cluster, allows users to create a stretched cluster between two geographically separate sites using the fault domain concept introduced in VSAN 6.0, and promises end users greater availability for their VMs across sites.
This is accomplished with the new concept of three fault domains, two of which host data while the third serves as a "witness" site. Similar to EMC VPLEX, the witness acts as a quorum tiebreaker to resolve split-brain situations in the event of a site failure. VSAN uses synchronous replication between sites to keep data in sync across all nodes, and its active-active architecture makes the storage usable at either site. Another new feature, VSAN for remote and branch offices (ROBO), provides end users the ability to deploy a two-node VSAN at ROBO sites, using the main data center as the witness site. This feature employs the same stretched cluster technique as previously mentioned, but allows multiple two-node VSAN sites to be created with all sites managed by one vCenter Server.
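The quorum role of the witness can be sketched as simple majority voting across the three fault domains (hypothetical code, not VSAN's implementation): when the inter-site link fails, only the partition that can still see a majority of the three domains keeps serving data.

```python
# Conceptual sketch, not VSAN's implementation: a witness breaks ties
# between two data sites, so only the partition holding a majority of
# the three fault domains continues serving a given object.

ALL_DOMAINS = {"site-a", "site-b", "witness"}

def has_quorum(reachable):
    """reachable: the set of fault domains a partition can see.
    A majority (at least 2 of 3) is required to keep serving I/O."""
    return len(reachable & ALL_DOMAINS) >= 2

# Site A is isolated; site B can still reach the witness:
print(has_quorum({"site-a"}))             # False -> site A stops serving
print(has_quorum({"site-b", "witness"}))  # True  -> site B keeps serving
```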
Additionally, VSAN 6.1 improves vSphere Replication by taking advantage of the new vsanSparse snapshot mechanism introduced with VSAN FS in VSAN 6.0. As a result, the minimum recovery point objective is reduced from 15 minutes to five, giving users the option to create a stretched cluster providing synchronous replication and then use vSphere Replication to sync to another site.
Other features introduced in the VSAN 6.1 update include:
- Support for symmetric multiprocessing fault tolerance is now available on VSAN. SMP FT was introduced with vSphere 6.0, but FT in general was not supported on VSAN until now.
- VSAN Management pack for vRealize Operations creates greater visibility across multiple VSAN clusters with Global View, including capacity monitoring, disk usage and SSD wear.
- Support for Windows Server Failover Clustering (WSFC) and Oracle Real Application Cluster. For WSFC, the only supported configurations are those that use the file share quorum witness; Exchange database availability groups using failover cluster instances are not supported.
- Support for new flash technology -- Intel NVMe and Diablo/SanDisk's ULLtraDIMM -- has also been introduced, providing support for very low latency SSDs.
- Nondestructive file system upgrade: the on-disk format can be upgraded from an older version of the VSAN file system (VMFS-L) to the latest (VSAN-FS) via the Web Client.
One notable omission from the new feature set of VSAN 6.1 is dedupe, which, along with erasure coding, is currently in its beta testing stage. This omission has proven to be a drawback to VSAN 6.1, as customers generally consider dedupe to be a key enterprise-grade storage feature which can help save money.
Why is the VMware VSAN uptake so slow?
At VMworld, I noticed a slide saying that VMware currently has over 2,000 customers running VSAN in their environments. While some might consider that a decent number, I think it's pretty low, especially taking into account that, as of this year, over 500,000 customers use vSphere globally.
There are a number of possible explanations for this relatively low number, including the cost of VSAN and the resistance of IT and storage administrators to change. Regarding the former, VSAN is licensed per CPU, meaning it doesn't come cheap -- with a manufacturer's suggested retail price of over $2,500, it's double the price of vSphere Standard. Once you start adding up the cost of licensing each host with vSphere and VSAN, you begin to approach the price of an entry-level storage array. As for the resistance, in spite of newer and more convenient technologies, end users are often more comfortable using and managing traditional storage arrays and as a result are reluctant to make the leap to software-defined storage.
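The per-CPU licensing arithmetic adds up quickly. The sketch below is illustrative only: it takes the article's "over $2,500" MSRP for VSAN at face value and assumes a vSphere Standard price of $1,250 per CPU (derived purely from the "double the price" comparison, not a quoted figure).

```python
# Illustrative licensing arithmetic -- not official VMware pricing.
VSAN_PER_CPU = 2500      # article: MSRP of "over $2500" per CPU
VSPHERE_PER_CPU = 1250   # assumption: half of VSAN, per the article's comparison

def cluster_license_cost(hosts, cpus_per_host):
    """Total VSAN + vSphere license cost for a cluster, per-CPU licensing."""
    cpus = hosts * cpus_per_host
    return cpus * (VSAN_PER_CPU + VSPHERE_PER_CPU)

# A modest four-node, dual-socket cluster:
print(cluster_license_cost(4, 2))  # 30000
```

Thirty thousand dollars in licenses for a small cluster, before any hardware, is how the total starts approaching the price of an entry-level array.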
Existing hardware also plays a role. A customer may have invested a good chunk of money in a network or Fibre Channel infrastructure dedicated to the storage array and might be reluctant to replace it. When virtualization first started to take off, everyone was told to "virtualize and consolidate" their infrastructure onto blade servers and a large shared storage array. VSAN would obviously reverse this direction and revert the customer back to large rack-mounted servers with local disks.
What does the future hold?
Sales of legacy SAN and network-attached storage arrays are falling in the face of a pincer movement from converged offerings and cloud storage, indicating that the future is software-defined storage, be it in the cloud or on commodity hardware. The Wikibon Project forecasts a huge growth in server SANs and hyperscale formats (i.e., aggregating many servers and their direct-attached storage into a single logical pool of storage).
That said, VSAN isn't for everyone, nor is it designed to be. However, it signals the beginning of the software-defined revolution, meaning end users will now have to start thinking above and beyond the standard traditional SAN format and about what is a good fit for the environment they need to build and support.
When virtualization was introduced, IT started moving to blade infrastructures backed by shared storage arrays because they were a good fit for the environments being built. Similarly, as the market shifts towards software-defined, we're witnessing a shift in what hardware should be purchased to underpin the software-defined data center: commodity hardware with software control, as opposed to the traditional vendor storage array. As infrastructure refreshes occur, modular storage nodes and converged offerings are becoming far more attractive options.
Overall, the advantages and disadvantages of VSAN -- versions 6.0 and 6.1 -- are apparent. On the one hand, VSAN is software-defined, built into the kernel and converged, and promises increased flexibility and VM-level granularity. VSAN is supported on blades, but only those listed on the HCL (HP's BL460c Gen9 with a D2200sb storage blade, or the Dell FX2 platform). The list of possible use cases for VSAN is also compelling, from virtual desktop infrastructure to testing and development, and from disaster recovery to ROBO; ultimately, it is up to customers to decide whether it is worth making the switch from traditional storage arrays to software-defined storage.