Using the EMC VSI vCenter plug-in

EMC VSI (Virtual Storage Integrator) for vSphere automates four main tasks – and this article is going to look at all of them…

 

  • Provisioning New Storage
  • Enhanced Storage Views
  • Full and Fast Clone (of interest to VDI and VMware View users)
  • Managing Multi-Pathing Policies (to make things interesting I’ve added how to deploy PowerPath via VMware VUM)

 

One of the patterns I’ve seen is that where previously the storage vendors had a plethora of individual plug-ins, they have all been putting engineering and development dollars into creating a single utility that does the whole shooting match, the entire kit-and-caboodle. A case in point is the EMC VSI (Virtual Storage Integrator) plug-in. Now, you hard-core uber-storage-admins are probably going to snort-and-sniff at this kind of thing – and say that if you're a “real” administrator you would want to crank up NaviSphere or UniSphere. I’m not sure I would agree.

 

Firstly, if a storage plug-in like the VSI covers the 10% of the functionality that you use 99% of the time, I would personally prefer to do my day-to-day tasks from vCenter – because when it comes to presenting iSCSI or NFS volumes, the last thing I want to do is work through a separate GUI or PowerCLI process just to make sure the ESX hosts can see them. That’s not to say that UniSphere isn’t a good management tool with great VMware integration – it’s just that the plug-in is the quickest and easiest place to carry out these tasks.

 

Secondly, in smaller environments you're likely to be the VMware/Network/Storage Admin all at once – so it's a little bit easier to get approval for what you consume when you're the guy who does the approving. But in larger shops, asking the Storage Team to create LUNs/Volumes is fraught with potential delays and misunderstandings – not least that “they” (the Storage Team) don’t appreciate that their 30GB standard for a LUN/Volume doesn’t really address your needs, and that stitching together 10x30GB LUNs/volumes into a VMFS extent isn’t quite what you were hoping for when you got out of bed this morning.

 

Thirdly, if you're on the way to being a cloud administrator/configurator/architect, these barriers of VMware Admin/Network Admin/Storage Admin represent the “old” way of parceling up administrative rights – and you're looking for a model which cuts through those layers. Anyway, as it's my environment, I have to do all of these roles – so anything that makes the job quicker, simpler or easier gets my vote. Perhaps the best way to sell a storage plug-in like the EMC VSI to your Storage Team is to explain that if they give you enough rights, you won’t ever need to call them up again. The truth is storage teams don’t much like you because you're a VMware Admin, and the less day-to-day interaction they have with you, the happier they will be to help when you really do need them.

 

To get started with the EMC VSI you will need three components – the EMC Solutions Enabler, the EMC VSI plug-in and an EMC storage array. The first two are by far the easiest to get your paws on. Although they are not required, I also have the NaviSphere CLI and the UniSphere Service Manager installed as well. The EMC VSI is a client-side plug-in – so you install it on a Windows PC, alongside the vSphere Client. Once it is installed and the plug-in is enabled you get a shiny new icon in your vSphere Client under “Solutions and Applications”:

 

The VSI contains three core components – “Unified Storage Management” (USM), the Storage Viewer and Path Management. The Storage Viewer has been around for some time, and massively improves on the basic UI that VMware provides in its storage views. The USM part is what allows you to do the funky stuff like provisioning new storage and allocating it to ESX hosts. The Path Management part allows you to set the correct multi-pathing settings for your ESX hosts – without having to do it on a per-ESX-host basis. It’s essentially there to facilitate configuring EMC PowerPath once it has been installed. Remember, EMC PowerPath is a separate product, and unlike the VSI it is not free.

 

Post-Configuration of the Plug-in…

Once you have the VSI icon, some post-configuration has to be done – after all, these plug-ins are clever but they don’t go out and auto-discover all the arrays that your ESX hosts are registered with and enable them for use by the plug-in. There is support for Symmetrix, CLARiiON, Celerra and VPLEX systems – and from my new EMC NS-120 I’ve got access to both the CLARiiON piece for fibre-channel, and the Celerra piece for iSCSI/NFS support. The whole unit combines to create one unified system – hence the brand name. So in the CLARiiON part you need to input the IP addresses of your SP A/B controllers together with the login credentials. For the Celerra you need to type the IP of the Control Station (essentially the management server of the whole system) together with your credentials for it:

 

The other task you will need to do for the Celerra is add it into “Unified Storage Management”. To complete this properly you will need the credentials for the Celerra AND what is called a DHSM User Account. This can be created at the Celerra Control Station – or you can use a free graphical tool by Nicholas Weaver from his nickapedia.com website called the UBER Celerra DHSM Tool. Using this neat utility you can quickly connect to the Celerra and create a DHSM User Account. In case you don’t know, the DHSM (Dynamic Hierarchical Storage Management) account provides access to the File Mover API, which traditionally has been used for moving data between tiers of storage but is now also leveraged for cloning and compression:

 

 

Once the DHSM user account has been created, you can specify it when you add the Celerra to the list:

 

 

The other piece of post-configuration you can do is make the EMC VSI aware of your VMware View environment, if you have one. It’s possible in the EMC VSI to register your Connection Servers and add them into its list of resources. This will allow you to create new desktop pools from your client-based VMs (with the VMware View Agent installed, of course) to build out a VDI environment. You’ll find the option to do that under the “Unified Storage Management” link:

 

Provisioning New Storage…

Once all this has been done, you can then proceed to the various views within the vSphere Client, where you will find that EMC adds menus and options at various context points to assist you in your daily tasks. On the properties of a VMware HA/DRS/DPM cluster (and elsewhere) you will get an EMC menu which, amongst other things, allows you to provision new storage – and if you select it you’ll get the option to specify either a Disk/LUN or a Network File System:

 

 

So for example, if I used the Network File System option, I would be prompted in the dialog box afterwards to configure a datastore name, and then select the Control Station, the Data Mover and the Interface on the Data Mover that would service the IO requests to the NFS volume. In case you don’t know, in the Celerra the “Data Mover” is the server that's responsible for dealing with all the read/write requests that come to the system over iSCSI or NFS. The terminology might sound strange – but if you think about it, the Data Mover moves data about, and the Control Station allows you to control stuff. Simple:

 

In the next part of the wizard you are able to indicate that you would like to create a new NFS export, and then choose which Virtual Storage Pool you would like to create the volume from. This allows you to use virtual (thin) provisioning, and also to set max/min values for the volume:

 

 

The Advanced button allows you to set options such as:

  • The export path
  • High Water Mark values
  • Where the volume is cached
  • Whether it supports features like PreFetch, Compression, AntiVirus Scanning and ESX Timeouts
  • What subnet to export to

 

The really nice thing about this process is that the wizard goes off and mounts the NFS volume for you on each and every host.
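If you ever find yourself doing the same thing by hand, a rough PowerCLI equivalent of that per-host mounting step looks like the sketch below. The vCenter name, cluster name, datastore name, Data Mover IP and export path are all placeholders from my lab, so substitute your own:

# Connect to vCenter first (assumes the PowerCLI snap-in is loaded)
Connect-VIServer -Server vcenter.corp.com

# Mount the same Celerra NFS export on every host in the cluster
foreach ($esx in (Get-Cluster "Production" | Get-VMHost)) {
    New-Datastore -Nfs -VMHost $esx -Name "infrastructure" `
        -NfsHost 192.168.3.79 -Path "/infrastructure"
}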

 

 

Note: The “infrastructure” volume is the one hosted by the EMC NS-120 Celerra; the other volumes you are seeing here are coming from one of two NetApp FAS2020s. I say that just in case you think this article is an infomercial for EMC. It isn’t – I’m interested in all storage vendors and how they integrate with VMware.

 

Of course these NFS mount points get created on the Celerra and will appear in the UniSphere views as well.

 

 

There are some other neat options to be aware of with the VSI – not least the Compress, Decompress and Extend Storage options. These appear at the Cluster, Host and Datastore levels. Compress/Decompress literally takes the files that make up a VM and shrinks them down to the minimum space. It’s currently a feature that’s limited to NFS volumes, and it's one of the reasons I select NFS when creating a new volume. The “Extend Storage” option does exactly what it says on the tin: it allows you to take a LUN/Volume and make it larger. In the world of NFS, when this has happened to me I’ve normally had to crank up the vendor's management tool, log in, locate the volume and make it larger – and then head back to vCenter to refresh the ESX hosts so they see the new storage. Not so with the EMC VSI plug-in: it can be done as one seamless routine. Just right-click and bring up the dialog box:
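For comparison, the manual “head back to vCenter and refresh” step that the plug-in removes would look something like this in PowerCLI – a rescan of every host in the cluster so the extra capacity shows up (the cluster name is a placeholder):

# Rescan HBAs and refresh VMFS on every host so newly grown storage is picked up
Get-Cluster "Production" | Get-VMHost |
    Get-VMHostStorage -RescanAllHba -RescanVmfs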

 

 

Of course, you can use the same provisioning wizard to allocate fibre-channel block-based storage to the ESX hosts if you prefer. You do that by selecting the “Disk/LUN” option in the wizard. This will then allow you to select which CLARiiON you wish to use.

 

 

You can then select which Storage Pool you want to carve your LUN from. In my case I just have one Storage Pool called “New_York_FC_Production_1”.

 

Once that is selected, you can complete the whole process by choosing whether to format it as VMFS (or leave it blank for an RDM), what VMFS block size to use, the LUN number, and the size. The nice thing about the wizard is that it's aware of EMC’s new “Auto-Tiering” policy features.
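Under the covers, the formatting step is the same operation you could run yourself from PowerCLI once the LUN is visible to a host – a hedged sketch, where the canonical name, volume label and block size are placeholders:

# Format a presented LUN as VMFS on one host; the other hosts pick it up after a rescan
$esx = Get-Cluster "Production" | Get-VMHost | Select-Object -First 1
New-Datastore -Vmfs -VMHost $esx -Name "New_York_FC_Production_LUN" `
    -Path "naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" -BlockSizeMB 8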

 

 

Once you click Finish, EMC’s VSI takes care of the rest of the heavy lifting… The VMFS volume label entered in the dialog box above is also set as the friendly name of the LUN in UniSphere.

 

VSI Storage Views

Something that’s been around for some time is the EMC Storage Viewer, which adds additional visibility to your storage from vCenter. One of the good things about virtualization is that it adds a layer of “abstraction” to your storage, such that we can talk of a “datastore” with scant regard to what RAID levels and so on those LUNs/Volumes actually provide. What the Storage Viewer does for you is peel all that back and expose it right up in the vSphere Client. At the very least this saves you having to toggle between the storage management tools and the vSphere Client.

 

So if you select the EMC VSI tab and select a datastore, it will show you this information on an NFS volume:

 

The LUNs option will show you the iSCSI or fibre-channel LUNs from either the Celerra or the CLARiiON – including, in the case of the CLARiiON, your multipathing settings.
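If you want to sanity-check the same details from the command line, PowerCLI will show you the LUNs and the paths behind them on a per-host basis – a quick sketch (the host name is just my lab host):

# List disk LUNs with their current multipathing policy on a host
$esx = Get-VMHost "esx1.corp.com"
Get-ScsiLun -VmHost $esx -LunType disk |
    Select-Object CanonicalName, CapacityMB, MultipathPolicy

# And the individual paths behind each LUN
Get-ScsiLun -VmHost $esx -LunType disk | Get-ScsiLunPath |
    Select-Object Name, State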

 

Note: You might notice that the option to “Hide PowerPath View” is dimmed here. That’s because I’ve yet to install PowerPath on the ESX hosts.

 

Full Clone/Fast Clone for VMware View

One product close to my heart is VMware View – I invested some of my time in writing a long guide to the product. One thing I always hoped to add to that guide was storage-vendor-specific methods of duplicating VMs, for those customers who would prefer not to use its “Linked Clones” feature. If you want to play with this feature, it would be handy to have a basic View environment set up – all you need is a connection broker and a template for your virtual desktop.

Once the EMC VSI has been installed, wherever you see a VM or template in the vCenter inventory you can right-click it and carry out either a Full Clone or a Fast Clone.

 

 

The Fast Clone is backed by the Celerra system (NFS only), and works by taking a snapshot of the VM, which it treats as the “parent”. In many respects that’s very similar to how VMware’s Linked Clones work – the child VMs all point back to this parent. When the cloning takes place these delta VMs are created in the SAME volume as the parent, so you're best off creating a volume (with the EMC VSI of course!) and then cloning your “base” virtual desktop to this location before kicking off the Fast Clone process. In contrast, the Full Clone creates a complete copy of the VM – there’s no snapshot, parent or child relationship to concern yourself with. Of course, it's the Fast Clone that will appeal to the VDI folks, who will want to assess its capabilities against other methods of deployment. After you have selected the Fast Clone option a wizard will spin up: you will be asked to select a cluster or resource pool for the clones, and then a dialog box will appear to control the number of clones you need, the naming convention – and so on. Personally, I think this is pretty self-explanatory.
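For a sense of what the Fast Clone saves you, here is what a conventional full-clone loop looks like in plain PowerCLI – every desktop is a complete copy of the template, which is exactly the disk space and time that the array-side snapshot approach avoids. The template, resource pool, datastore and naming convention below are placeholders:

# Conventional full clones: one complete copy of the template per desktop
$tpl = Get-Template "Win7-Gold"
$rp  = Get-ResourcePool "VDI"
$ds  = Get-Datastore "infrastructure"
1..10 | ForEach-Object {
    New-VM -Name ("SALES{0:D2}" -f $_) -Template $tpl -ResourcePool $rp -Datastore $ds
}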

 

 

Notice how the maximum number of fast-cloned VMs you're allowed in a single volume is 76.

[UPDATE: It turns out this number is not a hard limit in the Fast Clone feature at all. Actually, the limit on fast clones is very high (in the region of 50,000 and beyond). But that introduces a difficulty: technically, there'd be nothing stopping you from creating a bazillion clones and overwhelming the cluster with more VMs than it's capable of. Apparently, the VSI looks at the cluster's CPU cores, and then does a calculation to throttle back its scalability to be more in line with the capabilities of the cluster itself.]

 

Then you pick out one of your View Connection Servers – remember, in View each Connection Server is essentially a “duplicate” of the rest, because their configuration data is synchronized between them using Microsoft's lightweight version of Active Directory. The Connection Server appears in the list because I “registered” it with the EMC VSI earlier in the post-configuration stages.

 

 

If you're familiar with View, the next couple of pages of the wizard should be very familiar ground – they represent a simplified version of the questions you would normally be asked by View Admin if you were creating a new desktop pool. Of course, you don’t get the same “granularity” in terms of all the options – but I guess that’s the point. Again, the plug-in is reducing the amount of choice (or complexity, depending on your perspective), and you can always crank up the View Admin page if you want to expose that level of detail. However, you can do stuff like control whether users can reset their own desktops, and what protocol can be used – interestingly, the EMC VSI defaults to Microsoft RDP, rather than PCoIP…

 

So below I created a unique ID and display name for my desktops, which will be used by my sales team. You do have the option to indicate whether the pool is Persistent or Non-Persistent. That basically tells View whether the pools are sticky or non-sticky. With a sticky pool, users grab a desktop from the pool and then keep it. Next time they log in they will be returned to the same desktop – and critically it is theirs, and no-one else can use it. With the non-sticky or “Teflon” desktop pool, users hand the desktop back to the pool at logoff. I guess you could say that non-persistent pools are more in tune with a vanilla desktop which is just used concurrently by a group of users. The Persistent pool is perhaps more suited to power users who have the chance to customize their environment, and have more leeway in the system.

 

Now, to be strictly accurate, View 4.5 did away with this terminology and replaced it with a whole new set of words that mean (yes, I know this is annoying!) more or less the same thing. So in View 4.5 we talk about “Automatic Pools” (these allow you to automatically create virtual desktops, either by conventional cloning or VMware’s Linked Clones), and the terms “Dedicated” and “Floating” are now used to represent “Persistent” and “Non-Persistent”. I guess when EMC developed the VSI, they had in mind that there would be a lot of View customers perhaps still running on the older version of View. You pay your money, and you take your choice.

 

 

Once the cloning has completed, all that remains to be done is to handle the View “Entitlement” settings. Notice how the pool is “manual”. That means if an 11th user came along looking for a desktop, there wouldn’t be one for him or her. There are no settings here to say the pool's maximum is 100, start off with 25, and once those 25 have been used spin up another 10. To do that you would need to use an “Automatic Pool” from VMware.

 

 

Note:
One final thing to say about the Fast Clone feature: you will notice that the only VDI broker supported for this level of integration is VMware View. If you look at other storage vendors' plug-ins, they also support importing the desktops into other brokers, such as Citrix XenDesktop.

 

[UPDATE: Apparently, a VSI 5.0 is in the making – it may be ready for showcasing at this year's EMC World. It seems likely that the VSI's scope will be extended to include other VDI brokers.]

PowerPath & Multi-Pathing Configuration

Note:
In order for the PowerPath plug-in and VSI plug-in to work properly, and not give you any funny pop-ups in the vSphere Client, you will also need the Windows version of RTOOLS installed (EMCPower.RTOOLS.Net32.5.4.SP2.b299.exe). RTOOLS is very much like the vCLI from VMware: it is a set of command-line tools used to send instructions to the host to query the status of PowerPath and modify its settings. For example, PowerPath needs licensing – and you can do this with a combination of a .lic file and a licensing server (shades of Virtual Infrastructure 3, if you like!), or in an unserved way by copying .lic files to a directory on the ESX hosts. Part of the licensing process involves using a command called rpowermt to find out the “host ID” of each host on which PowerPath has been installed – using the command
rpowermt host=esx1.corp.com check_registration

 

Once you have this information, you can think about licensing. EMC were kind enough to put together some unserved licenses for my hosts. Once they came through, all I had to do was copy the .lic files to the default location (C:\Documents and Settings\Administrator.CORP\My Documents\EMC\PowerPath) and then issue the command – rpowermt.exe host=esx1 register
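Pulled together, the per-host licensing steps look like this from a PowerShell prompt on the management PC (the host name is a placeholder, and the .lic file is assumed to already be in the default rpowermt location mentioned above):

# 1. Get the host ID that EMC needs in order to generate the unserved licence
rpowermt.exe host=esx1.corp.com check_registration

# 2. Once the .lic file is in place, register the host against it
rpowermt.exe host=esx1.corp.com register

# 3. Re-run the check to confirm the licence has been applied
rpowermt.exe host=esx1.corp.com check_registration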

 

Of course you can't license a product that isn’t installed – I hope that’s kind of obvious. So I decided to use VMware VUM to push out PowerPath from a central repository.
 

As far as I can see you don’t need to buy PowerPath to take advantage of the multi-pathing configuration options, so you could use the plug-in to switch from one path-selection policy (PSP) that is built into vSphere to another. With that said, once PowerPath has been installed, if the volume is owned by an EMC array then PowerPath will automatically claim the paths to the storage. If you had some other array in the environment, such as a fibre connection to a NetApp system, it would continue to use the vSphere PSPs…
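For that non-PowerPath case, the kind of change the plug-in makes can also be scripted yourself – a hedged PowerCLI example that switches every disk LUN on every host in a cluster to the built-in Round Robin PSP (the cluster name is a placeholder):

# Set the built-in Round Robin path selection policy across the whole cluster
Get-Cluster "Production" | Get-VMHost |
    Get-ScsiLun -LunType disk |
    Set-ScsiLun -MultipathPolicy "RoundRobin"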

 

However, it's more interesting to cover this from a pure-EMC angle: how to get the PowerPath software installed on the ESX hosts using VMware Update Manager, and then configure the feature from a per-cluster perspective. There are a couple of ways of getting the PowerPath bundle into VUM; I personally found the “Import Patches” method the simplest. You start by extracting the PowerPath_VE_5.4.SP2_for_VMWARE_vSphere-Install_SW_Bundle.zip…

 

Then in VUM, under the “Patch Download Settings” option on the Configuration tab, select the “Import Patches” link and browse for the EMCPower.VMWARE.5.4.SP2.b298.zip file.

 

 

Once PowerPath has been imported, the next step is to create a VUM baseline that includes “Host Extensions”. This is a catch-all term used to cover any third-party add-on which gets installed to ESX, such as EMC PowerPath, the Cisco Nexus 1000V or driver updates. So in the Baselines & Groups tab, you hit the “Create” link, give the baseline a name and select the “Host Extension” option, and in the next part of the wizard pick out EMC PowerPath 5.4 SP2 as the extension you want to push out.
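If you'd rather script this than click through VUM, the Update Manager PowerCLI snap-in exposes roughly the same steps. Treat the sketch below as an assumption-laden outline rather than a recipe – the baseline name and cluster name are placeholders, and it assumes the PowerPath bundle has already been imported as described above:

# Find the imported PowerPath extension, wrap it in a static host-extension baseline,
# then attach the baseline to the cluster and remediate
$pp = Get-Patch -SearchPhrase "PowerPath"
$bl = New-PatchBaseline -Static -Extension -Name "EMC PowerPath 5.4 SP2" -IncludePatch $pp
Attach-Baseline -Baseline $bl -Entity (Get-Cluster "Production")
Remediate-Inventory -Baseline $bl -Entity (Get-Cluster "Production")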

 

 

Once created, the baseline can then be attached to all the ESX hosts in a cluster.

 

 

Then you can kick off the Remediation process…

 

 

Once PowerPath has been installed you can see that by default it claims all the paths to the storage. You can see this in a couple of places – firstly from the VSI Storage Views, and also from the standard VMware path management dialogs, where all the paths are marked as “Active”:

 

 

Once you are happy that PowerPath is installed and is successfully claiming the paths to your storage, you can do some further configuration. Normally, you would have to use the esxcli command at the Service Console, or the equivalent in PowerCLI/vCLI. But if you have a lot of hosts (say 32 in a cluster) that could be a bit of a pain. So to the rescue comes the EMC VSI, which gives you a right-click option on the cluster to configure your path policies.

 

 

I must admit I was a bit unsure what all these various options meant – so I pinged one of my EMC vSpecialist buddies, who helps with all my EMC storage stuff, for his advice. This is the info he gave me.


Least blocks (lb) – A load-balancing and failover policy for PowerPath devices, in which load balance is based on the number of blocks in pending I/Os. I/O requests are assigned to the path with the fewest queued blocks, regardless of the number of requests involved.

 

Least IOs (li) – A load-balancing and failover policy for PowerPath devices, in which load balance is based on the number of pending I/Os. I/O requests are assigned to the path with the fewest queued requests, regardless of total block volume.

 

Adaptive (ad) – A load-balancing and failover policy for PowerPath devices, in which I/O requests are assigned to paths based on an algorithm that takes into account path load and logical device priority.

 

Basic failover (bf) – A failover policy that protects against CLARiiON SP failures, Symmetrix FA port failures, and back-end failures, and that allows non-disruptive upgrades to work when running PowerPath without a license key. It does not protect against HBA failures. Load balancing is not in effect with basic failover; I/O routing on failure is limited to one HBA and one port on each storage system interface. This policy is valid for CLARiiON, Symmetrix, Invista, and VPLEX systems.

 

CLARiiON optimization (co) – A load-balancing and failover policy for PowerPath devices, in which I/O requests are assigned to paths based on an algorithm that takes into account path load and the logical device priority you set with powermt set policy. This policy is valid for CLARiiON storage systems only and is the default policy for them on platforms with a valid PowerPath license. It is listed in powermt display output as CLAROpt.

 

As I understand it, bf is the default (at least when PowerPath is unlicensed) – and it doesn’t have load balancing turned on. So it's an absolute must to change the policy, either using the EMC VSI or using powermt set policy.
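If you do want to make the change from the command line rather than the VSI, the same rpowermt tool used for licensing can set the policy per host – a hedged example that switches every PowerPath device on one host to CLARiiON optimization and then displays the result (the host name is a placeholder):

# Switch all PowerPath-managed devices on this host to the CLAROpt (co) policy
rpowermt.exe host=esx1.corp.com set policy=co dev=all

# Confirm the change - the policy column in the output should now read CLAROpt
rpowermt.exe host=esx1.corp.com display dev=all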

This was first published in March 2011
