When VMware launched its take on a hyper-converged infrastructure appliance, its partners and customers were enthusiastic about what it showed: a 2U, four-node all-in-one offering comprising compute, network, storage and virtualization that was simple to deploy and manage. Sounds perfect, right?
The demo of EVO:RAIL Manager at VMworld 2014 showed a working VMware environment deployed in 15 minutes, all done via a simple wizard, a few IP addresses and a click of a button. VMware also boasted easy lifecycle management, nondisruptive upgrades and patches to the VMware infrastructure, and simple scale-out.
But as the months rolled on and people digested the information being made available on EVO:RAIL, the enthusiasm to jump on the bandwagon waned. There were limitations, queries around licensing, questions about how to scale the offering and caps on the maximum configuration. Not to mention that the Qualified EVO:RAIL Partners (QEPs) were slow to bring their products to market, with little useful information about what their offerings provided to customers.
Many within VMware realized that EVO:RAIL wasn't packaged in a way that its partners could sell, or even in a package that customers would be willing to buy.
Customers complained about the bundling of licenses with the EVO:RAIL appliance, as many preferred the flexibility to use their own licenses (for example, if they had an enterprise license agreement). VMware listened and created the VMware Loyalty Program, which allows customers to apply existing vSphere Enterprise Plus licenses to their EVO:RAIL appliance (eight CPU licenses are required per appliance, and the licenses must be valid and supported).
The original cluster maximum of four appliances (16 nodes) was lifted with a recent software update (version 1.2), which now allows up to eight appliances in a cluster. This means a fully populated cluster with 32 nodes -- which happens to be the cluster maximum for vSphere 5.5 -- could potentially support 1,600 general-purpose VMs or 2,400 virtual desktops.
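Those headline numbers break down neatly per appliance -- a minimal sketch in Python, where the per-appliance densities are simply back-calculated from the totals above (sizing guidance, not benchmark results):

```python
# Deriving the version 1.2 cluster-capacity figures. The per-appliance VM
# densities are back-calculated from the article's totals (1,600 VMs and
# 2,400 desktops across eight appliances) -- an assumption, not VMware data.
APPLIANCES = 8                 # version 1.2 cluster maximum
NODES_PER_APPLIANCE = 4
GP_VMS_PER_APPLIANCE = 200     # 1,600 / 8
DESKTOPS_PER_APPLIANCE = 300   # 2,400 / 8

total_nodes = APPLIANCES * NODES_PER_APPLIANCE
total_gp_vms = APPLIANCES * GP_VMS_PER_APPLIANCE
total_desktops = APPLIANCES * DESKTOPS_PER_APPLIANCE
print(total_nodes, total_gp_vms, total_desktops)  # 32 1600 2400
```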
An announcement in June 2015 also changed the appliance hardware requirements, allowing support for processors with different core counts (dual, six, eight, 10 or 12) and for each node to have between 128 GB and 512 GB of memory. In addition, there are now two VSAN options -- 1x 400 GB SSD with 3x 1.2 TB HDD, or 1x 800 GB SSD with 5x 1.2 TB HDD -- which offer about 13 TB or 22 TB of usable storage per appliance.
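As a sanity check on the 13 TB and 22 TB figures, the raw HDD capacity of each option converts neatly to binary terabytes once the caching SSD is excluded. This is a sketch based on my own assumption about how the "usable" number is derived, not VMware's published math:

```python
# Rough sanity check of the per-appliance usable-storage figures.
# Assumption (mine): "usable" is the raw HDD capacity of all four nodes
# expressed in binary terabytes (TiB); the SSD is cache only and
# contributes no capacity.

TB = 10**12    # decimal terabyte, as drive vendors label disks
TiB = 2**40    # binary terabyte, as vSphere reports capacity

def appliance_capacity_tib(nodes, hdds_per_node, hdd_size_tb):
    """Raw HDD capacity of one appliance in TiB (caching SSD excluded)."""
    raw_bytes = nodes * hdds_per_node * hdd_size_tb * TB
    return raw_bytes / TiB

small = appliance_capacity_tib(nodes=4, hdds_per_node=3, hdd_size_tb=1.2)
large = appliance_capacity_tib(nodes=4, hdds_per_node=5, hdd_size_tb=1.2)
print(round(small), round(large))  # 13 22
```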
But it's not enough, because some customers remain unconvinced. Several questions need to be answered first.
- Why does the license have to be Enterprise Plus when EVO:RAIL doesn't use distributed vSwitches, Storage DRS, Host Profiles or Auto Deploy? Surely standard licenses would suffice.
- Why can you only scale by adding another appliance with four nodes? Why can't you start with fewer than four nodes?
- Why can't you add more disks to the VSAN? Why can you only add four disks (or six with the new software revision), of which one is the SSD for caching? VSAN is capable of supporting five disk groups per host, with eight disks per group (of which one is the SSD).
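To put that last limitation in numbers, here is a sketch of the gap between the VSAN per-host maximums quoted above and what an EVO:RAIL node ships with, counting capacity disks only (caching SSDs excluded):

```python
# Comparing VSAN's per-host disk maximums (five disk groups of eight disks,
# one SSD per group) against an EVO:RAIL node's best case (six disks, one
# of which is the caching SSD) -- figures taken from the text above.
VSAN_MAX_GROUPS_PER_HOST = 5
VSAN_DISKS_PER_GROUP = 8      # includes 1 caching SSD per group

vsan_max_capacity_disks = VSAN_MAX_GROUPS_PER_HOST * (VSAN_DISKS_PER_GROUP - 1)
evo_rail_capacity_disks = 6 - 1   # new software revision, minus the SSD

print(vsan_max_capacity_disks, evo_rail_capacity_disks)  # 35 5
```

In other words, an EVO:RAIL node exposes one-seventh of the capacity disks the underlying platform could support.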
What VMware can do to make it right
In my honest opinion, there is nothing technically wrong with EVO:RAIL, but it doesn't make sense when you look at the licensing and scalability.
VMware tries to position EVO:RAIL as a midmarket play, yet prices it for enterprise customers. An enterprise customer would rather purchase SAN storage and individual servers than appliances with local storage, as the appliance model is too restrictive for the enterprise market. And a midmarket customer doesn't have the budget to fork out $150,000 to $300,000 for EVO:RAIL.
Also, with VMware preaching "Software Defined XYZ," it would make far better sense if it offered EVO:RAIL as software that could run on any commodity hardware (not just a QEP product). VMware might be better off simply selling EVO:RAIL as an automation software package.
The hardware restrictions and scalability constraints handcuff customers who don't want to be locked into a single hardware vendor, or stuck with appliances that can only be purchased and upgraded in multiples of four nodes. If a customer just exceeded the VM capacity of a single appliance, they would have to purchase an entire four-node appliance rather than a single additional node. It would make more sense to offer a three-node appliance to start (the general rule of thumb for vSphere host clustering) and the ability to scale by adding single nodes.
At VMworld 2015, it was announced that version 2.0 of EVO:RAIL would support vSphere 6, with the cluster now scaling to vSphere 6 maximums -- 64 nodes (within 16 appliances). This means each appliance is designed to run up to 800 VMs (or 12,800 VMs per cluster of 16 appliances). In addition, the hardware requirements for each appliance have been expanded to support any Intel E5 CPU (Ivy Bridge or Haswell), between 512 GB and 2,048 GB of memory, and 13 TB or 22 TB of usable hybrid storage (HDD capacity plus SSD cache).
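The version 2.0 scaling figures are internally consistent, as a quick check shows (numbers taken directly from the announcement above):

```python
# Verifying the EVO:RAIL 2.0 scaling claims: 16 appliances of 4 nodes
# should hit the vSphere 6 cluster maximum of 64 nodes, and 800 VMs per
# appliance should give the quoted 12,800 VMs per cluster.
NODES_PER_APPLIANCE = 4
MAX_APPLIANCES = 16
VMS_PER_APPLIANCE = 800

max_nodes = MAX_APPLIANCES * NODES_PER_APPLIANCE
max_vms = MAX_APPLIANCES * VMS_PER_APPLIANCE
print(max_nodes, max_vms)  # 64 12800
```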
While version 2.0 is not yet available, I hope VMware rolls vSphere 6 Update 1 and VSAN 6.1 into the release. But is this enough to encourage customers to buy into EVO:RAIL? In my opinion, there are still features I would like to see:
- The option of an all-flash EVO:RAIL, considering VSAN 6.0 supports all-flash disk groups.
- The ability to tier storage by mixing VSAN disk groups (all SAS and all SSD), with EVO:RAIL Manager using Storage DRS and storage profiles to move VMs across the different tiers.
- Support for VSAN 6.1, and the ability to use the stretched VSAN cluster feature to deploy one appliance in Site A and one in Site B (with a nested witness). This would give customers a cheap DR option and help them build a better business case for spending all that money on EVO:RAIL appliances.
- An initial three-node EVO:RAIL appliance to bring down the entry price (as Nutanix has done). More midmarket users purchase three ESXi hosts than four, mainly because they have smaller workload requirements. It's easier to justify three nodes for 60 VMs than four.
- The ability to expand by single nodes or half an appliance (two nodes). Let's face it: asking midmarket customers to fork out for four nodes every time they wish to expand their cluster is not commercially viable.
- The ability to connect to Fibre Channel storage (local VSAN just doesn't provide enough capacity), or perhaps even JBOD support, as in VSAN 6.0.
- Full vROps integration to give the full monitoring and reporting capabilities.
- An EVO:RAIL Horizon Manager that automates the deployment of a VDI infrastructure, rather than QEPs merely including View licenses in their EVO:RAIL bundles. If VMware can automate vSphere and vCenter deployment, the obvious next step is to automate VDI deployment, especially since it targets VDI as a good use case.
Further down the line, I would like to see the EVO:RAIL software packaged as a flavor of vSphere that customers could purchase (per CPU) and simply drop onto any server they own.