It’s been a while since I played with the latest plug-ins from NetApp. Yesterday, I was very fortunate to have one of NetApp’s specialists (Luke Reed) come up to my colocation – and do a controller and OnTap update to my FAS2040s. So I’m now cooking on OnTap 8, with access to VAAI.
Things have moved on the plug-in side of things from NetApp since I last looked at them this time last year. Back then the plug-ins were each a separate install – now they are all bundled together in a single installer – and NetApp has adopted the Virtual Storage Console as the moniker to cover a whole range of functionality. To some degree this shows in the plug-in, with each bit of functionality having its own independent controller configuration (where you type the name/IP/username/password of your NetApp system) – rather than doing this centrally from one location.
Some of this stuff is not new, and some of it is new. But this is the kind of stuff you can do with the NetApp VSC:
- Enhanced Storage Views – above and beyond the standard VMware vSphere Storage Views
- Rapid Cloning for VMware View, Citrix XenDesktop
- Backup and Recovery
- VSC CLI
What’s new here to me is the addition of the ability to create, mount and revert NetApp snapshots of VMs. This used to be called “SnapManager for Virtual Infrastructure” – and had a separate (and not altogether pretty) UI. This has now been integrated as a right-click option within vCenter once the NetApp VSC is installed.
The other thing that’s new to me is the VSC CLI. That’s right – the plug-in that adds new GUI enhancements to the vSphere Client has its own CLI. That might strike you as a bit odd, but I do see value there for folks who want to use the vCLI/PowerCLI to script events in VMware, whilst at the same time scripting storage events too… without the need to PuTTY into the NetApp Filer and run commands there.
Installing & Post-Configuration
Installing the NetApp VSC is an easy affair. I decided to run it as a service on my vCenter Server. The NetApp VSC is primarily a server-based plug-in, rather than a client plug-in. I rather like this because you install it once, not N times for the number of management systems you have. Once installed (a next-next affair) it will open a webpage which will allow you to register it with your vCenter.
Once that is completed, you will find that you have a NetApp icon in the Solutions and Applications area of vCenter.
This will give you access to each of the three configuration panes associated with the NetApp VSC. Each of these needs configuring with the details of the NetApp storage arrays your ESX hosts are accessing.
The Virtual Storage Console actually uses a discovery process to locate the NetApp systems – by interrogating the storage configuration of the ESX host. This did take some time, mainly because I have a number of storage systems that all reside on the same 172.168.x.x network. Sadly, the VSC picked up on my EMC Celerra, and tried (and failed) to add that to the list. It wasn’t the end of the world – I just had to right-click and remove it. That’s not unusual for these plug-ins – for example, the EMC VSI sent back some errors on some NFS mount points because it couldn’t identify them – what it didn’t realize was that those NFS mounts were coming from NetApp, not the EMC Celerra. So there’s a message there. These plug-ins are specific to a vendor, and if you work in a multiple storage vendor/protocol environment like I do (I have NetApp, EMC, Dell, HP Lefthand) then expect some oddness.
Once the NetApps are discovered you can save your credentials against them, and the plug-in comes with a handy link to NetApp FilerView, to which it knows to pass on those credentials. The red alarms you’re seeing here are caused by the NetApp being very full… and the fact I haven’t plugged in the second redundant power supply. As for the ESX hosts – they haven’t been properly optimized for use with NFS.
Under Storage Details – NAS is where the important stuff is. This will show your NFS volumes, datastore capacity, NFS pathname, permissions, capacity of the aggregate, and also the host access. I’ve shown this in the screen grab below because, if I remember rightly, the previous VSC didn’t have these “View” options under “Host Privileges”.
The “Data Collection” option allows you to collect data – essentially log files – from a controller, ESX host, Brocade switch, Cisco switch, McDATA switch or QLogic switch, as well as exporting the VSC’s own logs. It’s much too exciting to put a screen grab here. Seriously though, I do understand that log collection is important for troubleshooting, especially if you’re engaged with an official support channel. The “Tools” section contains access to utilities that allow you to re-align the virtual disks of your VMs – as well as offering the opportunity to change SCSI I/O timeouts within a guest operating system – Linux, Windows and Solaris are all supported. Finally, the “Discovery Status” pane just enumerates various stats such as how many hosts, controllers, LUNs and so on have been found…
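The SCSI timeout tweak is worth a word, because it’s the one most people skip: Linux guests default to a 30-second disk timeout (`/sys/block/<dev>/device/timeout`), which is too short to ride out a storage controller failover, so the guest OS tools raise it. As a rough sketch of the sort of thing such a tool leaves behind – the 190-second value and the rule itself are my illustration, so check the VSC documentation for what it actually sets – a persistent udev rule looks like this:

```shell
# /etc/udev/rules.d/99-scsi-timeout.rules  (illustrative sketch only)
# Raise the in-guest SCSI disk timeout so the guest survives a storage
# controller failover. The 190s value is an assumption - use whatever
# your vendor's guest OS timeout tools actually recommend.
ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{type}=="0", \
  RUN+="/bin/sh -c 'echo 190 > /sys$DEVPATH/device/timeout'"
```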
Provisioning & Cloning
Of course, another common task with vSphere and storage is provisioning new datastores and cloning VMs. That’s something that’s been around for a while from NetApp and its plug-ins. Before the VSC we had the RCU – the Rapid Cloning Utility. It was a bit of an unfortunate name, as many folks were quick to dismiss it as a cloning utility only, when it could actually provision new NFS volumes – and then mount them to all the ESX hosts. That saves time – no laboriously going through the NFS wizard, making a typo… watching it fail and starting all over again – and it beats the pants off having to write a PowerCLI script to do the same. So just as with the EMC VSI, NetApp’s VSC can create the volume, share it out, and allocate host access, all in a couple of clicks. I guess what both EMC and NetApp are offering in their cloning technologies is an easy way for VMware Admins to consume the new VAAI features together with the vendors’ own versions of “linked clones”. As with the Backup & Recovery part of the VSC, you do have to “register” your NetApp array with it. Personally, I prefer this manual process, as it seems to be quicker than using IP-based discovery.
When you add in the controller, it will return the network interfaces, volumes and aggregates that it owns. Using the field picker it’s possible to filter what the “user” can see. I think this is rather neat – because it means I need not necessarily worry too much about getting the storage guys to create users, groups and assign permissions… I can do some of that filtering with the plug-in.
Note: Be a bit careful here. By default the VSC adds ALL the available resources to the field picker on the right. To filter, you use the << arrows to remove items from the list. So in my case I’m only allowing access to the e0a interface, the volumes, and aggregate1. aggr0 needs to be avoided, as that’s the ‘system’ location where your configuration is stored on the NetApp array.
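To make the whitelist behaviour concrete, here’s a trivial Python sketch of what the field picker is effectively doing – everything is discovered, and only what you retain stays visible (the function and resource names are mine, purely illustrative):

```python
def visible_resources(discovered, allowed):
    """Mimic the VSC field picker: the plug-in discovers every object
    the controller owns, and only explicitly retained objects stay
    visible to the user of the plug-in."""
    return [r for r in discovered if r in allowed]

# Everything the controller owns turns up by default...
discovered = ["e0a", "e0b", "aggr0", "aggregate1", "vol_vmware"]
# ...and I remove what I don't want exposed - notably aggr0.
print(visible_resources(discovered, {"e0a", "aggregate1", "vol_vmware"}))
# → ['e0a', 'aggregate1', 'vol_vmware']
```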
Of course, once enabled, you will get a “Provision datastore” option on the right-click of a cluster, and a “Create rapid clones” option on the right-click of a VM. There are icons on the toolbar as well – but between you and me, as I advance further into my dotage I tend not to remember icons very well, though I do remember menus and labels.
If you select “Provision datastore” you will be asked to select your NetApp Filer(s), and whether you want an NFS volume or a VMFS volume backed by either iSCSI or Fibre Channel (remember, you do need to be licensed for these technologies on the NetApp Filer for those to work). After selecting that, you will be able to define the parameters for your datastore. Notice how I can’t select aggr0, because it wasn’t included in the “Resources” part of the configuration wizard – and how I can set “advanced options” like thin provisioning and auto-grow.
This provisioning process is pretty good – and it looks like NetApp has done some work to tidy up the permissions process. Previously, it was impossible to interrogate an ESX host to find out which VMkernel ports were being used for NFS storage (you could find out if a VMkernel port was used for management with ESXi, for HA, for VMotion or for FT). So storage vendors had no real option but to find ALL the VMkernel (vmknic) ports and add them to the ACL on the storage. That was not perhaps the slickest or most secure way to handle the problem, but in the absence of an API they had little choice. In this release they appear to have resolved this – I would love to know how! Perhaps they are listening to the existing NFS traffic, or simply pinging to work out which IPs route to the storage and which do not – the ones that fail to respond don’t get added. It’s either that or some funky, undocumented fancy footwork from VAAI… Using the provisioning wizard I created a new volume and NFS export called “salesdesktops”, and the VSC correctly assigned the IP address valid for the vmknic.
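My routing guess can be sketched in a few lines. Assuming the plug-in knows each vmknic’s IP and netmask from the host configuration, simple subnet matching is enough to decide which interfaces belong on the export ACL. To be clear, this is my speculation about the logic, not NetApp’s actual code, and the names are invented:

```python
import ipaddress

def vmks_to_export(nfs_server_ip, vmknics):
    """Return the vmknic IPs that could plausibly carry NFS traffic to
    the array - i.e. those whose subnet contains the NFS server address.
    vmknics is a list of (ip, prefix_length) tuples from the host config."""
    server = ipaddress.ip_address(nfs_server_ip)
    allowed = []
    for ip, prefix in vmknics:
        # strict=False lets us pass a host address rather than a network address
        network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
        if server in network:
            allowed.append(ip)
    return allowed

# A host with a management vmknic and a dedicated storage vmknic:
# only the storage-facing interface should land on the export ACL.
print(vmks_to_export("172.168.3.89",
                     [("192.168.3.101", 24),    # management vmknic
                      ("172.168.3.101", 24)]))  # storage vmknic
# → ['172.168.3.101']
```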
Once this volume was mounted and available, I cloned one of my Windows 7 View Images to the storage, and started to look at the Rapid Clone feature…
The rapid clone wizard is very similar to the previous release I reviewed last year – but one helpful improvement is the ability to specify the VM folder where you want to locate the desktops. Previously, the VM clones were created in the same folder as the parent VM – which isn’t always where you want them to be. VMware View is particularly fussy about paths to objects in vCenter, so if you just moved them to another location this would break the VMware View desktop pool creation process. As with other storage vendors’ VDI cloning utilities, the VSC offers a handy way to name/serialize the VMs – and they not only support two different editions of VMware View, they support Citrix XenDesktop too…
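The name/serialize step is simple enough to sketch – a base name plus a zero-padded counter is the pattern these wizards all follow (this is my illustration of the pattern, not the VSC’s actual code):

```python
def clone_names(base, count, start=1, pad=3):
    """Generate serialized clone names the way these VDI cloning
    wizards typically do: base name plus a zero-padded counter."""
    return [f"{base}-{n:0{pad}d}" for n in range(start, start + count)]

print(clone_names("salesdesktop", 4))
# → ['salesdesktop-001', 'salesdesktop-002', 'salesdesktop-003', 'salesdesktop-004']
```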
The other nice thing is they have updated the terminology if you select VMware View 4.5 as your broker – gone are the old terms of persistent/non-persistent – and in come the new terms of Dedicated/Floating.
After clicking Finish… the copy process begins… and all that is left for the administrator to do is handle the View entitlements…
Note: You’ll notice that I’ve got a display name of “Windows 7 Sales Desktop”. That was added by me, by hand, using the View admin tool – the VSC doesn’t set this attribute. If the field is left blank, users see the “Pool ID” – in my case “salesdesktop”. There are some naming restrictions with Pool ID values in View – special characters and spaces cannot be used… Also notice that this is a “Manual Pool”, not an automatic pool. So it’s not possible to set the max, min and start-with options as you would with an automatic pool within View. If you needed more virtual desktops for the sales people, you would need to crank up the VSC and do another set of clones.
Backup & Recovery
As with the other main components of the VSC, the Backup & Recovery part needs to be told of your available NetApp arrays.
For me this is a new feature – but also an old feature. As I said earlier in the article, NetApp previously had a technology called SnapManager for Virtual Infrastructure (SMVI). What they have done is port the UI for this into the VSC so it’s now fully integrated with vCenter. It’s called “Backup & Recovery”, and what it actually does is leverage the snapshotting capabilities of the NetApp array. So think of this as being similar to snapshots in VMware – but driven by your storage vendor. It’s possible to take multiple snapshots before you make a major upgrade of software – and then revert back to the previous version if there’s a problem. It’s possible to power on a VM from a snapshot alongside the existing one – and use it to restore files that may have become lost or corrupted. The Backup & Recovery options from NetApp also allow for scheduling – so you can say that a VM is snapshotted at a specified time or interval.
So if I select “Backup Now” on a VM, that will create a NetApp snapshot. In my case I decided to take a snapshot of my View 4.5 Connection Server – prior to running the upgrade to View 4.6.
Once the “backup” has completed, it’s possible to see it by using the “Mount” option, which displays the currently available backups.
Mounting the backup triggers a storage refresh on the affected ESX host – effectively mounting the NetApp snapshot as a new NFS target.
This can be browsed like any other datastore, and you can locate the VMX file of the VM – and register it with vCenter. Of course you’ll need to be careful – give it a unique name, and patch it to a network location that doesn’t create an IP conflict with the existing VM, which may still be powered on. If you do power on this snapped VM, be prepared for the UUID question asking if you “Moved or Copied” the VM, because it could potentially be powered on on a different ESX host from the one where it was snapped.
If you do need to do a “Restore” of the entire VM, then it’s merely a question of selecting that from the plug-in menu, and selecting the “backup” you prefer. The field picker in this case allows you to restore ALL the virtual disks of the VM, or just some of them if you have a multi-disk VM.
NetApp’s VSC builds on and improves the separate plug-ins of SMVI, RCU and the original VSC… there’s a little bit of tidying up to be done – such as having one central point from which all the components take the array configuration. But I think that may well be quite a minor bit of engineering. I’m looking forward to getting my paws on the next version of System Manager – rumor has it that it’s no longer based on the Microsoft MMC, but built on top of a web browser with Java. I have a suspicion that the long-term plan will be for System Manager to usurp FilerView as the main management front-end.
Next up will be EqualLogic and their HIT/VE for VMware… Once I’m done I think a thought-piece about these provisioning tools might be in order for TechTarget… It’s great that these storage provisioning tools exist, but as a VMware Admin, will the storage team ever allow you to carve up chunks of storage without them being directly involved…?
Finally, if you’re troubleshooting NetApp VSC issues then this doc would be a good start – http://communities.netapp.com/docs/DOC-6534
This was first published in March 2011