What’s (not so) new in vSphere5 – Part 4 (Storage)

Now, this is a big, big section – because there are lots of changes, improvements, and innovations going on here. So much so that I've had to add one of those "read more…" options, otherwise the post would take up the whole of the page. What's really new is stuff like support for a software-based FCoE adapter (along the same lines as the iSCSI Software Adapter, except it needs supported NICs to work properly), the implementation of VASA (alongside improvements to its big brother, VAAI), a new file system in the shape of VMFS5, as well as the release of a Virtual Storage Appliance (VSA). Although not included in the core vSphere SKU and sold separately, the VSA is mentioned in the VCP blueprint that inspired this series of posts. In addition to these features, VMware has also added Datastore Clusters, Storage Profiles and Storage DRS.

 

Phew! A lot to get your teeth into – as well as filling in any knowledge gaps from vSphere 4.1 and refreshing your brain cells on the stuff you thought you knew but have forgotten since your last VCP test!

iSCSI Improvements

VMware now recognise there are in fact three models for delivering iSCSI functionality to the ESX host – the software iSCSI initiator, hardware-based iSCSI HBAs, and iSCSI-enabled NICs that are enabled from the main server BIOS. VMware now clearly state that they prefer a physical separation of the iSCSI storage network from the rest of the network – but they will, of course, support standard VLAN tagging as a way of achieving the same result.

ESX 5.0 introduces "iSCSI Port Binding", which allows for true multipath load-balancing with the software initiator. According to my sources (Twitter) this has been around since vSphere 4.x – what's new is a fancy GUI front-end. To get this to work you MUST configure the vSwitch or portgroups correctly. There are two ways to achieve it:

Method 1:
Two separate vSwitches – each with a physically different vmnic – and each with a VMkernel portgroup on it.

Method 2:
One vSwitch backed by multiple vmnics. The first portgroup (say, called IP-Storage1) would use vmnic1 as its "Active" adapter, with vmnic2 marked as an "Unused Adapter". The second portgroup (say, called IP-Storage2) would have the opposite configuration – with vmnic2 marked as "Active" and vmnic1 marked as an "Unused Adapter". Basically, every VMkernel port has its own dedicated vmnic in either method.

Once the iSCSI Software Adapter has been added to the system (via the new "Add…" option under "Storage Adapters" on the Configuration tab of an ESX host), you can open its properties and, on the "Network Configuration" tab, add the VMkernel ports that are valid for "iSCSI Port Binding".

TIP: If you're not sure whether your network configuration is valid for "iSCSI Port Binding", you'll find the dialog box above simply won't allow you to add the VMkernel ports. Also, on the properties of a portgroup you should find a piece of descriptive text that states whether "iSCSI Port Binding" is "Enabled". Finally, on the properties of a volume or LUN, you should find that it supports the "Manage Paths" option and shows more than one channel to the target; with luck, the correct plug-in for your type of array will have been selected – in my case the Dell EqualLogic SATP.
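
If it helps to see that rule written down, here's a quick Python sketch of the check the dialog box is effectively doing (my own illustration – the portgroup structure and names are made up, and this isn't VMware's actual validation code):

    # Sketch of the eligibility rule for iSCSI port binding:
    # a VMkernel portgroup qualifies only if it has exactly one
    # active uplink and no standby uplinks.

    def eligible_for_port_binding(portgroup):
        return len(portgroup["active"]) == 1 and len(portgroup["standby"]) == 0

    portgroups = [
        {"name": "IP-Storage1", "active": ["vmnic1"], "standby": [], "unused": ["vmnic2"]},
        {"name": "IP-Storage2", "active": ["vmnic2"], "standby": [], "unused": ["vmnic1"]},
        {"name": "Management",  "active": ["vmnic0"], "standby": ["vmnic3"], "unused": []},
    ]

    for pg in portgroups:
        state = "Enabled" if eligible_for_port_binding(pg) else "Not eligible"
        print(f"{pg['name']}: iSCSI Port Binding {state}")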

Software-based FCoE Adapter

ESX 5 introduces support for a new software-based FCoE (Fibre Channel over Ethernet) adapter. As with the software iSCSI adapter (and it is added in the same location), it allows Ethernet-based NICs to be plugged into an Ethernet switch and, using the appropriate modules, have that linked to a Fibre Channel network (in UCS parlance these are sometimes referred to as fabric extenders). The term "software-based FCoE adapter" is perhaps a little misleading, because the physical NICs must be supported for this to work, and it's not yet standard for a modern server with on-board NICs to have this functionality… yet…

For this to work you must configure just one vmnic on its own dedicated vSwitch – once added and initialised, the software adapter will discover the VLAN and priority class.
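
Again, just to make that rule concrete, a tiny sketch of the "one vmnic on its own dedicated vSwitch" check (the vSwitch names and layout are invented for illustration):

    # A vSwitch is only suitable for the software FCoE adapter, per the rule
    # above, if it has a single uplink dedicated to it.

    vswitches = {
        "vSwitch2": ["vmnic4"],             # dedicated uplink - fine
        "vSwitch3": ["vmnic5", "vmnic6"],   # shared uplinks - reconfigure first
    }

    def ok_for_software_fcoe(uplinks):
        return len(uplinks) == 1

    for name, uplinks in vswitches.items():
        print(name, "suitable" if ok_for_software_fcoe(uplinks) else "reconfigure first")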

Storage vMotion

Storage vMotion now fully supports "linked clones", which you see in products like VMware View and vCloud Director, as well as fully supporting VMs with snapshots attached to them. SvMotion has a new copy engine that creates a "bitmap" of the blocks that change during the SvMotion process, together with a mechanism for "mirroring of blocks" and "split writes" – so when a VM is "in flight", both the source and destination are updated with any changes. SvMotion no longer uses the changed-block tracking typically used by backup vendors, and these new methods of tracking and controlling writes should improve the overall performance of SvMotion.
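
To picture the "mirroring of blocks" idea, here's a much-simplified Python model I've put together (it is not VMware's copy engine – just an illustration of why a single pass is enough): writes to regions that have already been copied are sent to both sides, while writes to regions not yet copied only need to land on the source, because the bulk copy will carry them over later.

    # Simplified model of a mirror-mode storage migration (illustration only).
    # The disk is a list of blocks; "copied" tracks bulk-copy progress.

    class MirroredMigration:
        def __init__(self, source, destination):
            self.src, self.dst = source, destination
            self.copied = 0                       # blocks already transferred

        def copy_next_block(self):                # one step of the bulk copy
            if self.copied < len(self.src):
                self.dst[self.copied] = self.src[self.copied]
                self.copied += 1

        def guest_write(self, block_no, data):    # a write issued by the running VM
            self.src[block_no] = data
            if block_no < self.copied:            # already-copied region: split the write
                self.dst[block_no] = data         # so source and destination stay in sync

    src, dst = list("AAAAAAAA"), [None] * 8
    m = MirroredMigration(src, dst)
    for _ in range(4):
        m.copy_next_block()
    m.guest_write(1, "B")       # mirrored - block 1 was already copied
    m.guest_write(6, "C")       # not mirrored - the bulk copy will carry it over
    while m.copied < len(src):
        m.copy_next_block()
    print(dst == src)           # True: one pass, no iterative re-scans needed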

VAAI (vStorage APIs for Array Integration)

Introduced in vSphere 4, VAAI gains three new primitives in vSphere 5. The first merely extends the "copy offload" primitive to NFS, where previously it was only available to block-based storage using the VMware VMFS file system. As such it's not so much a new feature, more that it's better supported than in previous releases. The copy-offload feature stops the blocks of a VMDK being read down from the array to the host and then written back up to a different volume/LUN on the same array. A good example of this situation is where you have a LUN/volume for your "templates" from which you read, and a destination LUN to which you write when you create a new VM. A lot of processes that look like moves are actually copy/delete operations, so copy offload has benefits elsewhere too.
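
A crude way to picture what copy offload saves (a conceptual sketch with made-up objects, not the actual VAAI mechanics): without the primitive, every block travels array → host → array; with it, the host issues one request and the array moves the blocks internally.

    # Contrast between a host-mediated copy and an offloaded copy (conceptual).

    class Array:
        def __init__(self):
            self.luns = {"templates": list(range(1000)), "vms": []}
            self.blocks_sent_to_host = 0

        def read(self, lun, i):                    # block travels down to the host
            self.blocks_sent_to_host += 1
            return self.luns[lun][i]

        def write(self, lun, block):               # block travels back up to the array
            self.luns[lun].append(block)

        def clone(self, src, dst):                 # "copy offload": the array does the work
            self.luns[dst] = list(self.luns[src])  # no traffic to or from the host

    def host_copy(array, src, dst):
        for i in range(len(array.luns[src])):
            array.write(dst, array.read(src, i))   # read down, write back up

    a = Array()
    host_copy(a, "templates", "vms")
    print("blocks through the host:", a.blocks_sent_to_host)   # 1000

    b = Array()
    b.clone("templates", "vms")
    print("blocks through the host:", b.blocks_sent_to_host)   # 0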

What's really new is support for a primitive called "Thin Provision Stun". This warns the administrator when a volume is becoming over-committed – with more thin virtual disks than the LUN/volume has actual capacity for – the intention being to avoid an out-of-free-space condition. If the datastore does fill up, the affected VMs are paused ("stunned") rather than crashed, so the administrator can add capacity and resume them.

Additionally, VAAI in vSphere 5 includes a "space reclaim" feature for thin provisioning… Here's the problem. You have a 64GB thin virtual disk – 10GB is written to it, and then you write 40GB of temporary data. The temporary data is later deleted, but the space is not given back: the amount of free space drops from 54GB to 14GB and stays there (10GB + 40GB = 50GB remains allocated). Space reclaim tells the array which of those blocks are no longer in use, so they can be returned to the free pool. In vSphere 5.0 there was a problem with this feature on some vendors' storage arrays – so the recommendation was to actually turn it off. Since a recent update this has been resolved…
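
Working those numbers through in a quick sketch (the "datastore" here is just a counter, and the final step stands in for the reclaim behaviour described above):

    # A 64GB thin disk: writes grow the allocation, deletes do not shrink it
    # unless the dead space is reclaimed.

    DATASTORE_GB = 64
    allocated_gb = 0

    def write(gb):
        global allocated_gb
        allocated_gb += gb

    def free_space():
        return DATASTORE_GB - allocated_gb

    write(10)                 # real data
    print(free_space())       # 54 GB free
    write(40)                 # temporary data
    print(free_space())       # 14 GB free
    # ...the temporary data is deleted inside the guest: nothing changes...
    print(free_space())       # still 14 GB (10 + 40 = 50 GB remains allocated)
    allocated_gb -= 40        # space reclaim hands the dead 40 GB back
    print(free_space())       # back to 54 GB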

VASA/Storage Profiles

VASA (vSphere Storage APIs for Storage Awareness) – apart from adding yet another confusing acronym beginning with a V – is the little brother of VAAI. VASA allows storage vendors to add providers to vCenter, which then report on the capabilities of your storage. Some vendors ship their provider as part of a virtual appliance (Dell), others as vCenter-side software that installs as a service (NetApp). I've written in some detail about them on my old RTFM Education blog.

You can take these basic categories and create profiles from them. These profiles then show up in the dialog boxes that affect the placement of VMs – say, when you create a new VM, clone a VM, or deploy from a template. It's designed as an advisory system where VM admins are given guidance on where to place their VMs. You can also create your own user-defined capabilities without the use of VASA.

Storage Profiles can be applied to individual VMs, covering where the .VMX and .VMDK files are located – and, just as with Host Profiles, there's a compliance feature that lets you scan your VMs to see whether they comply with the storage profile. So in some respects you can see them as a kind of "policy", with the one caveat that they only offer advice – they don't force a VM admin to select a particular type of storage. It's more an "advisory" system than a "policy" system.
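
The compliance check itself is conceptually simple. Here's a sketch (the capability names are invented for illustration – in real life they come from the VASA provider or your own user-defined capabilities):

    # A VM is compliant if every datastore holding its files advertises
    # all the capabilities required by the VM's storage profile.

    gold_profile = {"replicated", "ssd-backed"}

    vm_file_placement = {
        "web01.vmx":  {"replicated", "ssd-backed", "dedupe"},   # capabilities of its datastore
        "web01.vmdk": {"replicated"},                           # missing "ssd-backed"
    }

    def compliant(profile, placement):
        return all(profile <= caps for caps in placement.values())

    print(compliant(gold_profile, vm_file_placement))   # False - the VMDK is non-compliant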

Datastore Clusters and Storage DRS

Datastore clusters allow you to group individual datastores together to create one cluster of storage. Think of it like a bunch of ESX hosts placed inside a VMware DRS cluster, which aggregates the CPU/memory resources of the hosts into a single entity. The same is possible with datastore clusters – and, just as with VMware DRS, fundamentally a VM still winds up on a specific host and on a particular datastore.

The datastores can reside on different arrays – but VMware recommends that the datastores within a cluster share similar performance characteristics: the same disk type, number of spindles (if any), and RAID level. Datastore clusters are supported across all three storage protocols (NFS, iSCSI and FC) but, again, it's not recommended to mix these types within the same datastore cluster. There are some other caveats:

  • ESX5 only…
  • Don’t mix VMFS and NFS together
  • Don’t mix replicated and non-replicated together
  • Don’t mix VMFS3 and VMFS5 together
Datastore clusters are part and parcel of Storage DRS, which controls where a VM gets placed at power-on (as VMware DRS does for server compute), and which can also move a VM's .VMDK files from one datastore to another.

SDRS uses two criteria – free space and I/O latency of the datastores within a datastore cluster – both for initial placement and for deciding whether to move a VM from one datastore to another within the cluster. The condition on free space is checked once every 5 minutes, whereas the condition on latency is evaluated once every 8 hours by default – both of these intervals are configurable. Turning on SDRS also turns on the Storage I/O Control (SIOC) feature (which now supports NFS), which is used to measure the latency affecting a VM's VMDK files.
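
For initial placement the effect is roughly "filter on the two thresholds, then favour free space". Here's a sketch using what I recall as the default thresholds (80% utilised space, 15ms latency) – the structure and numbers are illustrative, not the real algorithm:

    # Pick a datastore for a new VMDK: reject datastores that would breach the
    # space-utilisation or latency thresholds, then choose the most free space.

    SPACE_THRESHOLD = 0.80      # assumed default utilised-space threshold
    LATENCY_THRESHOLD_MS = 15   # assumed default I/O latency threshold

    datastores = [
        {"name": "ds1", "capacity_gb": 1000, "used_gb": 750, "latency_ms": 9},
        {"name": "ds2", "capacity_gb": 1000, "used_gb": 400, "latency_ms": 22},
        {"name": "ds3", "capacity_gb": 1000, "used_gb": 500, "latency_ms": 6},
    ]

    def place(vmdk_gb):
        candidates = [
            ds for ds in datastores
            if (ds["used_gb"] + vmdk_gb) / ds["capacity_gb"] <= SPACE_THRESHOLD
            and ds["latency_ms"] <= LATENCY_THRESHOLD_MS
        ]
        if not candidates:
            return None
        return max(candidates, key=lambda ds: ds["capacity_gb"] - ds["used_gb"])

    print(place(100)["name"])   # ds3 - ds2 is too slow, ds1 would exceed 80% utilisation
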
As with VMware DRS there is a storage maintenance mode, which can be used to evacuate a datastore of all its VMs before some kind of maintenance task, such as removing the datastore from the cluster. As with VMware DRS there are affinity and anti-affinity rules – but these differ subtly from the compute versions. It's possible to indicate that the files of a VM are "kept together" so they always reside on the same datastore within the datastore cluster (affinity), or that they must "be separated" – and there are also VM-to-VM rules where you keep the files of two VMs together or apart.
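
And a similarly small sketch of how those rules constrain placement – keep a VM's files together, or keep two VMs' files on different datastores (the data structures are mine, for illustration only):

    # Check a proposed placement (file -> datastore) against SDRS-style rules.

    placement = {
        "app01.vmx":  "ds1",
        "app01.vmdk": "ds1",
        "app02.vmx":  "ds2",
        "app02.vmdk": "ds2",
    }

    def vm_datastores(vm):
        return {ds for f, ds in placement.items() if f.startswith(vm + ".")}

    def keep_together(vm):              # intra-VM affinity (the default behaviour)
        return len(vm_datastores(vm)) == 1

    def vm_anti_affinity(vm_a, vm_b):   # keep two VMs' files on different datastores
        return vm_datastores(vm_a).isdisjoint(vm_datastores(vm_b))

    print(keep_together("app01"))              # True - all of app01 lives on ds1
    print(vm_anti_affinity("app01", "app02"))  # True - app01 and app02 never share a datastore
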
SDRS is compatible with the big features such as snapshots and RDMs, and it also supports NFS as a datastore type. However, it's not supported on ESX4, and if your storage array supports some type of "auto-tiering" – by which volumes/LUNs can be moved to different tiers of storage (SATA, SAS, SSD) – it's recommended that you only enable the initial placement feature of SDRS, and leave it to the array to balance between cold, warm and hot spots of I/O.

Finally, a word of caution. Backup remains a disk-intensive activity, and it's therefore recommended that you use the scheduling feature of SDRS to exclude SDRS activity during the backup window.

SSD Tag

VMware have introduced a new field that allows you to see from the vSphere Client whether your storage is SSD-backed. Non-SSD devices are labelled "Non-SSD", whereas NFS mount points are labelled "Unknown".

VMware Virtual Storage Appliance (VSA)

[Recommendation: set the VSA up and play with it before doing your VCP5 exam. I didn't, and had to guess the answers to some questions!]

VMware have entered the storage game!

They have developed a storage virtual appliance which sits on top of vSphere. Although it's not part of vSphere and is sold as a separate product, don't be surprised if you're asked questions about it on the VCP. Some of those questions will assume you have set it up and played with it in anger, as they ask about various things you see in the dialog boxes and management windows. You have been warned!

VMware refer to the VSA as a "distributed array of storage", in that each ESX host runs a virtual appliance and holds a replica of data held elsewhere. Should one of the virtual appliances fail, there is a copy of its data held on one of its peers in the cluster. VSA is NFS-only for the moment, and the way it is set up depends on the number of ESX hosts you have. The virtual appliance essentially "shares out" the local storage of the ESX host, and VMware recommend a RAID1+0 configuration on the local controller.

If you have just two ESX hosts, each runs the appliance, and your vCenter instance acts as the cluster manager and witness to the other nodes, so that the system can work out whether one of the ESX hosts is dead or merely disconnected from the network (i.e. a split brain has occurred). In this case vCenter runs the Cluster Service and Cluster Manager. In a three-node ESX environment – although there is still a manager to aid the setup and configuration – vCenter does not get involved in deciding what happens if a virtual appliance fails. The appliance has two NICs: one for front-end communication to the ESX hosts, and one for back-end communication to the management system. When imported, the VSA has 24GB of RAM and a maximum of eight VMDKs. At the moment utilization isn't especially great, but VMware are introducing other RAID levels to the appliance to present more accessible storage with the same protection. With the replica configuration I've seen 1.5TB of raw capacity result in 780GB being accessible to the hosts…
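
The capacity overhead is easy to reason about with a bit of arithmetic. The sketch below simply follows the description above – local RAID1+0 halves the raw disks, and the network replica halves what's left – and it lines up with my 1.5TB → 780GB observation if that 1.5TB figure is what the controller presents after local RAID (an assumption on my part):

    # Rough usable-capacity arithmetic for the VSA (illustrative, not a sizing tool).

    def usable_gb(raw_disk_gb, local_raid10=True, network_replica=True):
        capacity = raw_disk_gb
        if local_raid10:        # RAID1+0 on the local controller mirrors the disks
            capacity /= 2
        if network_replica:     # each datastore is replicated to a peer appliance
            capacity /= 2
        return capacity

    # If ~1.5TB is what the controller presents (RAID already applied),
    # only the replica overhead remains:
    print(usable_gb(1536, local_raid10=False))   # ~768 GB - close to the 780GB I saw

    # From truly raw spindles, both overheads apply:
    print(usable_gb(3072))                       # ~768 GB from 3TB of raw disk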

VMFS5

ESX5 introduces a new file system – thrillingly labelled "VMFS5″. :-) By far the biggest change is that a VMFS volume can now be up to 64TB in size without resorting to extents, and pass-through RDMs now support up to 64TB as well. On the downside, it's still the case that the largest .VMDK file remains 2TB, despite the fact that many modern guest OSes now support GPT/GUID methods of addressing larger storage. VMFS5 gets its scalability increase from the adoption of that very same GPT/GUID approach to addressing the disk. Natively formatted VMFS5 volumes have a unified 1MB block size, with sub-blocks of 8KB.
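
As a quick sanity check on those limits, here's a small sketch using the figures quoted above (the helper itself is just for illustration, and the VMDK cap is really 2TB minus 512 bytes):

    # The headline VMFS5 numbers from above, expressed as a simple validator.

    TB = 1024**4
    MAX_VOLUME = 64 * TB          # upper bound quoted for a VMFS5 volume / pass-through RDM
    MAX_VMDK   = 2 * TB - 512     # a .VMDK file is still capped at ~2TB
    BLOCK_SIZE = 1 * 1024**2      # unified 1MB block size on natively formatted VMFS5
    SUB_BLOCK  = 8 * 1024         # 8KB sub-blocks for small files

    def check(volume_bytes, vmdk_bytes):
        if volume_bytes > MAX_VOLUME:
            return "volume too large for VMFS5"
        if vmdk_bytes > MAX_VMDK:
            return "VMDK too large - still limited to ~2TB in vSphere 5"
        return "ok"

    print(check(40 * TB, 1 * TB))   # ok
    print(check(40 * TB, 4 * TB))   # VMDK too large - still limited to ~2TB in vSphere 5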

VMFS3 is still available for mixed environments that contain both ESX4.x and ESX5.x hosts. In terms of upgrading, it's possible to upgrade from VMFS3 to VMFS5 without the need to power off or Storage vMotion the VMs – but an upgraded volume retains some of the parameters of the original format, such as its block size.

This was first published in July 2012
