A couple of weeks ago it was my good fortune to attend a new class on Microsoft Desktop and Application virtualization. If you're interested, the course is 10324A – Implementing and Managing Desktop Virtualization, and it covers some of the technologies in the Microsoft Desktop Optimization Pack or MDOP. The MDOP is available to those who have a Volume License agreement and who have purchased Software Assurance (SA) with Microsoft. The MDOP is really a bundling of related technologies into a single SKU and includes the following tools:
- Application Virtualization (App-V)
- Microsoft Enterprise Desktop Virtualization (MED-V)
- Advanced Group Policy Management (AGPM)
- Diagnostics and Recovery Toolset (DaRT)
- Desktop Error Monitoring (DEM)
- Asset Inventory Service (AIS)
Whilst the course mentioned all of the tools, it's fair to say that the main emphasis was on MED-V and App-V from the MDOP. The course also adds modules about "User State Virtualization" (aka Profiles and Group Policies) and covers how to set up and configure Microsoft's Remote Desktop Services (RDS) to allow end-users to access a shared desktop on what we once called "Terminal Services", as well as to access a virtual desktop (VDI) environment. It was with some irony that I found the hosted-VDI approach placed at the end of the course. It would be tempting to interpret this as Microsoft "downgrading" the significance of virtual desktops running on a datacenter hypervisor. The reality is that its position in the course owed more to its dependencies on the RDS components that must be configured first. It's also not unusual for vendors to spend more time on their strongest technologies rather than their weakest – in an effort to show their best side to the customer.
The course did involve quite a bit of installing stuff only to use snapshots on Hyper-V to revert those VMs back to a clean state. I understand the reason for this is to allow each part of the course to stand independently of the others – so it can be customized to the customer's needs. Personally, as a former instructor I prefer the style of course where over the week you build up a solution piece by piece until you are left with all the pieces in place. So to some degree the course was a missed opportunity to see App-V, RDS, VDI and Hyper-V all working in harmony to produce a desktop environment.
With that said, as an overview of what Microsoft is doing in this space it was an excellent course, and it was much quicker to spend time on a training course than it would have been sat in my lab environment with no instructor to interrogate. The course has a very wide view of VDI – so it took in running virtual machines on end-users' desktop PCs in the form of XP Mode and MED-V, as well as looking more at the "hosted" environment where end-users get their desktop from a centralized bank of servers. So what I want to do is take each technology in turn, and give you my personal take on the product, starting with Virtual PC and XP Mode.
It's not my intention to do any detailed "competitive analysis" between Microsoft and the other vendors. That's the subject for a different article. I want to assess the technologies on their own merits, whilst at the same time remembering that there is more than one ISV in the world. If you are looking for a more competitive-analysis approach I would heartily recommend reading the virtualfuture.info "smackdown" papers, which stop short of "bake-offs" from one vendor to another, but do have a handy matrix which allows you to compare and contrast the features – it's particularly strong on application virtualization.
Virtual PC and XP Mode
I struggled with what to call this section because Microsoft has two names for virtualization that runs on the PC – there's the Microsoft Virtual PC 2007 SP1 product, and there's the Windows Virtual PC product which is part of Windows 7. There does seem to be some tidying up needed in this area. For example, whilst Windows 7 uses the Windows Virtual PC product and supports XP Mode, the MED-V product uses the older Microsoft Virtual PC 2007 edition. This older version does not support the XP Mode feature – but given its relatively recent update you can very easily enable an XP Mode-like feature called RemoteApps if needed.
But perhaps I'm getting ahead of myself – what is this XP Mode and what is it used for? In case you don't know, Windows 7 users have the opportunity to run a version of Windows XP. This allows them to seamlessly run native Windows XP applications directly from their Windows 7 desktop. The experience is not dissimilar to VMware's Fusion product, which allows you to seamlessly run Windows applications on a Mac with what's called "Unity" mode. I must say I do like this "seamless" approach to integrating the host and guest operating systems – and it's something I would love to see the other virtual desktop vendors do. At the end of the day, users don't give a damn about where their applications come from (local OS, local virtual desktop, hosted virtual desktop or terminal services) – what users want is their app – and the more integrated these delivery mechanisms are, the easier the adoption by the end-user.
XP Mode is Microsoft's strategy for enabling folks to migrate from Windows XP to Windows 7. You should know that XP Mode doesn't come with any method of centrally administering these Windows XP instances – it is currently incompatible with the MED-V product, which uses a different version of Virtual PC. So it's really designed for SMBs/SMEs who have such a small number of PCs that this doesn't pose a problem. If a large business wanted to take this approach to its legacy Windows XP applications, Microsoft would probably say that looking at MED-V or RDS with virtual desktops would be a better approach. As such this part of the course was of academic interest to me, because my focus is generally on customers with significant install bases. I did however like the integration of the two operating systems into a single "workspace" – but it would only be of interest to home users or the SMB market.
Microsoft Enterprise Desktop Virtualization (MED-V)
If I were to describe the MED-V technology, I would say it's just like XP Mode but with management. It was originally designed and built by Kidaro, who Microsoft acquired in 2008. It contains a "Virtual Image Repository" which acts as an image store that allows for centralized deployment, management, and monitoring of virtual machine images. This repository is essentially an IIS web-server, and some work needs to be done in the "MIME types" area of IIS to register support for .ckm files or "Compressed Kidaro Machines".
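As a sketch of what that registration looks like on IIS7, here's a hedged web.config fragment. The application/octet-stream MIME type is my assumption for a generic binary download – check the MED-V documentation for the exact value your version expects:

```xml
<!-- Allow IIS to serve MED-V's .ckm image files; without a MIME
     mapping, IIS refuses to serve unknown static file types. -->
<configuration>
  <system.webServer>
    <staticContent>
      <mimeMap fileExtension=".ckm" mimeType="application/octet-stream" />
    </staticContent>
  </system.webServer>
</configuration>
```

On IIS6 the equivalent is done in the website's Properties dialog under HTTP Headers > MIME Types, rather than in web.config.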
The deployment of these images, together with the "policies" of MED-V, constructs a "workspace" which is a combination of the local applications and the applications inside the locally executing virtual desktop. Sadly, you cannot take existing VM images from Hyper-V and import them into MED-V. Although there is a tool to convert Virtual PC images into Hyper-V, no such tool exists to convert Hyper-V images to make them usable by MED-V. The provisioning is based on Microsoft AD DS, and you can use MED-V policies (which refresh every 15 minutes) to configure usage. This allows for a seamless and transparent integration of published applications with local applications, and offers features like clipboard sharing and printer redirection. MED-V also supports browser redirection controls, so you could set which URLs open in, say, IE6 rather than IE8. Of course, being a Microsoft product, this browser redirection is only there for Internet Explorer, rather than any competitor web-browser such as Mozilla Firefox.
The virtual machine image is downloaded to the local machine, and after that only the changes or "differences" are sent down to the local machine at configurable intervals. This delta-update process is referred to as a "Trim Transfer" in MED-V parlance. If you come from a VMware background, probably the closest thing to MED-V is VMware's ACE technology, or perhaps "Local Mode" in VMware View. MED-V's publishing feature allows you to build "workspaces" which present just the applications the user needs from their VM. Of course it's quite tricky to work out exactly how much network IO would be consumed by MED-V in this process, especially during the first-use phase. I'm afraid the best answer to this is "it depends" – it depends on the size of the image, how much overlap there is between the local files and the files of the MED-V image, and how much bandwidth you have.
Note: In this screen grab the .EXE column lists what applications are visible from the MED-V image. These have to be typed by hand as there is no browse function available into the MED-V image. You need to be a little careful here in case the VM image doesn't have the location of the .exe on its PATH environment variable.
There are some practical concerns that I think could stymie MED-V's adoption. Firstly, it's still pitched at delivering legacy operating systems such as Windows 2000 and XP, and their legacy applications. It currently doesn't support running Windows 7 inside the MED-V images. So at the moment it's not a solution that helps you roll out Windows 7 to your environment. Of course that could change, and if Microsoft stick with this strategy, I dare say that when the next version of Windows is released MED-V would get an update to allow it to run on that next generation of Windows – with support for something like Vista and Windows 7 as the guest operating system. For the moment it seems clear that Microsoft's strategy surrounding MED-V is as a way of delivering legacy OSes down to the local PC. It's not intended by design to be a way of rolling out Windows 7. Of course, that doesn't mean you can't use Windows 7 as the host with Windows XP as the guest, which is what was originally intended.
Secondly, the images of MED-V can clearly be quite large – whatever the disk footprint of your slimmest Windows XP build is, it would be pushed out to a significant number of physical PCs. There are many ways to deploy MED-V images to the PC, and some of those include out-of-band methods such as DVDs and USB memory sticks. But personally I know most corporations would want to use some existing network-enabled method – whichever way you cut it, that's quite a lot of network IO to put out there. For some, MED-V will look like a sledgehammer to break an egg. Let's say you want to run Internet Explorer 6 for a legacy application portal that is incompatible with the newer browsers – you'd be downloading GBs of data for a browser that is only a couple of MB in size. There is a plethora of tools out there on the 3rd-party market that would allow you to achieve the same result with a much smaller footprint, such as Browsium's UniBrows, Spoon (formerly Xenocode), InstallFree, VMware ThinApp and Symantec Endpoint. Of course the official line from Microsoft is that using such tools to virtualize IE6 would be a breach of their support and EULA agreements.
Thirdly, you'd have to be a little careful to make sure the local PCs were of sufficient spec to run both their local OS and their virtual OS. This is a problem I've seen with all "local" execution of VMs on PC platforms. It's also the Achilles heel of VMware View "Local Mode" and Citrix's XenClient. VMware View assumes that you have enough memory on your PC to run the virtual desktop "cached" to the disk – but try running a 4GB Windows 7 image on a 1GB Windows XP laptop. It won't power on. Conversely, Citrix XenClient is so bleeding edge its hardware support is a challenge – and it hardly helps customers wanting to reuse older PCs or extend the lifetime of their existing hardware. One thing I did like about MED-V was its capacity to downgrade the amount of RAM allocated to the virtual machine based on the amount available to the host operating system. That's something I would really love to see Microsoft's competitors do with the ability to cache hosted virtual desktops to the end-user's machine.
Now, I'm not writing off these technologies – MED-V, Local Mode, client hypervisors – but they come with such baggage that they are solutions a business would select tactically – when and where they need to – as opposed to a "first choice" option.
Remote Desktop Services & Virtual Desktop Infrastructure
It's been some time since I looked at RDS as a serious alternative to, say, Citrix XenApp. I know some people will take issue with that sentence, as the "official" line has always been that Citrix XenApp complements rather than replaces Microsoft RDS. There has been, and continues to be, close co-operation between Citrix and Microsoft. I saw that last year in Citrix's offices in Dublin with the EMEA Readiness Team. It's clear that Citrix intends to adopt Microsoft GPOs as its main method of getting settings into the environment, and they have adopted the MMC as the main management tool for their products. So I understand that co-operation. My point is that customers have always asked or told me that "in a few years Microsoft will catch up with Citrix, and then we won't need Citrix anymore". It's much the same mantra that's repeated about VMware. You know, the whole "sunset" angle. I remember this being said to me back in 1997, and guess what – Citrix are still here…
So I was coming from a specific angle – has Microsoft progressed its development of RDS to such a degree that it represents a "good enough" option that would make Citrix XenApp a nice-to-have, rather than a must-have? I think the answer is not quite – but certainly much progress has been made in the intervening years since my involvement in this space, and Microsoft is tantalizingly close. In my day Microsoft didn't have a "gateway" or web service to advertise the availability of the desktop – they do now. They also didn't have a way of "publishing" the desktop or applications – they do now – it's called RemoteApp. However, there's "close" and there's "not close enough", and I think RDS largely falls into that second camp.
Firstly, despite the obvious enhancements to the Remote Desktop Protocol (RDP), it still falls short of Citrix ICA/HDX – and there's no obvious contender from Microsoft to VMware's PCoIP. Of course, on the horizon is Microsoft RemoteFX in the shape of Service Pack 1. But I think it would be wrong to conflate RemoteFX with either Citrix HDX or VMware PCoIP. RemoteFX is more angled towards delivering high-quality graphics across a LAN (not a WAN) using acceleration on the GPU – as opposed to delivering that quality of experience via the protocol alone. In these respects it's perhaps closer to the blade PC approach, or the use of Teradici cards in thin clients to deliver a quality graphics experience to folks like CAD/CAM users.
Secondly, the feel of RDS is very similar to the feel of Citrix – in that there are quite a number of different roles required to make the stack work, and of course in a production environment each of these roles could be doubled to allow for load-balancing and availability.
In contrast, most VDI solutions are much simpler. There will be a role that has elevated privileges to access the management system of the virtualization layer and the Microsoft Active Directory service, sitting on the private network – commonly referred to as the "broker". Then there will be a role that is hardened and placed into the DMZ that is purely used to traverse the firewall. It's fair to say that many of the roles that exist in RDS could be doubled up on the same servers – so the RDS licensing could reside on a domain controller, and the RDS Connection Broker could reside on the RDS Session Host – which are just virtual machines residing on a Hyper-V server (or an RD Virtualization Host).
It's tempting to equate Microsoft's RemoteApp publishing feature to that of Citrix XenApp's publishing – and from a functionality perspective they are similar, in that they both have the same job: they act as a method of "advertising" either applications or desktops to the end-user. Unfortunately, the process of publishing in Citrix XenApp is still better, despite all the years that have elapsed since MetaFrame 1.8. With Citrix you go through a simple wizard, browsing for the application to make available; assign a group; and then select from a list the XenApp servers that provide it. With Microsoft RDS, one server is used to build the list of RemoteApps – and then these have to be exported and pushed out to each RDS Session Host in the environment. Each RemoteApp is actually a .RDP file which is created, and which needs digitally signing to stop any unwanted pop-up messages appearing.
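The signing itself can be done with the rdpsign.exe tool that ships with Windows. As a rough sketch – the certificate thumbprint and file path here are made-up placeholders, and the certificate must already be installed in the machine's certificate store:

```bat
rem Sign a RemoteApp .RDP file with a code-signing certificate,
rem identified by its SHA1 thumbprint, to suppress the warning pop-ups.
rdpsign /sha1 0f1e2d3c4b5a69788796a5b4c3d2e1f001122334 "C:\RemoteApps\Payroll.rdp"
```

Since this has to be repeated per .RDP file, it is exactly the sort of thing you would script across a farm rather than run by hand.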
From a management perspective, that's quite a lot of work if you have lots of applications and a large RDS server farm to control. Quite often folks will say, "well, that isn't much work" – but they forget that in the enterprise the number of terminal servers could be quite large – for example, the guys on the course from the insurance company have over 350 individual Citrix XenApp servers to administer at any one time. I guess the other delivery option is, rather than using the RDS portal to allow access to these .RDP files, you could copy them to the client and fit them up with a shortcut. To do this you would have to build an .MSI for each and every application, and then push that out via Microsoft GPOs or some other engine. All that just to get a shortcut on the desktop… For me, if you were going to opt for RDS you would want to couple RemoteApp with what's called "RemoteApp and Desktop Connection" as the easiest and most seamless way of delivering the applications to the user – sadly that's only available to Windows 7 users.
On the plus side, what I would say is that at least Microsoft has an offering! One thing I've been saying for some time is that the value of what we used to call "terminal services" hasn't gone away. Sadly, there's been so much talk about virtual desktops that the marketing PR juggernaut surrounding the VDI brigade has somewhat overwhelmed this known stalwart of the datacenter. In fact I'd go so far as to say that a combination of terminal services in a VM together with application virtualization has given this model a shot in the arm, by ending the scale-out challenges and application conflicts of the previous generation. Could it be that the reason VDI hasn't taken off in the way some folks hoped is that this old method is tried, trusted and improved by the virtualization projects of the previous decade? For some time I've been saying that the VDI vendors are missing a trick by not having a fully rounded application delivery strategy. For me personally, that strategy has to include hosted VDI (with the option to run locally), application virtualization with the ability to deliver effectively to both virtual and physical machines, and some sort of terminal services offering – one that goes beyond merely integrating with either Citrix XenApp or Microsoft RDS as if they were "legacy" systems.
On the down side, the deployment of new virtual desktops with RDS is still very much dependent on creating images of the operating system, and then using sysprep and sysprep.inf files to automate the process of joining them to the domain. This is true of both local virtual desktops with MED-V and hosted virtual desktops with RDS. For the average Windows admin this won't be a huge problem – but if you look at competitors in this space, they generally come with a setup that allows you to separate the cloning of a new VM from its customization. In this way you only need one VM image, together with many guest customizations. Additionally, these competitor VDI brokers often come with their own method of managing the domain-join process, such as VMware's "QuickPrep" system, which pre-populates the domain with computer accounts created directly in Active Directory. This is much quicker than the sysprep approach.
Occasionally, this reliance on sysprep.inf made me feel that deploying virtual desktops, either via MED-V or RDS, is no easier than deploying the operating system onto a physical machine. It feels like a missed opportunity to make virtual deployment slicker and more efficient than what we used to do with disk cloning technologies like Symantec Ghost. This manual preparation doesn't stop there. For example, if you want to create non-persistent pools in Microsoft RDS, the way it is done is by creating a hard-coded snapshot with the label "RDV_Rollback". Of course, if you had many virtual desktops in the pool, you would have to use PowerShell to set each of these snapshots up. There are other aspects of RDS virtual desktop management that seem odd. For example, you assign users to desktops from the Virtualization Host MMC, but once assigned there you cannot view the permissions. These permissions are written up to the domain controller, and exist as per-user permissions on user objects in AD. So if you want to re-assign or un-assign the desktop from a user, it's to AD you must go.
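As a hedged sketch of that PowerShell step – this uses the Checkpoint-VM cmdlet from the later Hyper-V PowerShell module rather than the raw WMI calls you would have needed on 2008 R2, and the "Pool-*" naming filter is my own invention, not anything RDS mandates:

```powershell
# Create the hard-coded RDV_Rollback snapshot on every VM in the pool,
# so RDS can revert non-persistent desktops to a clean state.
Get-VM -Name "Pool-*" | ForEach-Object {
    Checkpoint-VM -VM $_ -SnapshotName "RDV_Rollback"
}
```

The point is less the one-liner itself than the fact that the magic "RDV_Rollback" label is a convention you have to know about and script around, rather than something the management UI does for you.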
User State Virtualization
I must say we negotiated with the instructor to move this section of the course to the very end, and because of my travel arrangements I had to leave the course before the module was completed. I have, however, gone through the training materials in my own time, and I wasn't surprised by the content. One of the things I noticed in the course headings was the attempt to add the word "virtualization" to some technologies that in many cases predate the use of virtualization in the x86 space. So Microsoft RDS was tagged as "Presentation Virtualization", and technologies such as roaming user profiles and GPOs were tagged as "User State Virtualization".
The reality is that many customers have found user profiles to be a major pain point in their environment. They are the bête noire of most Windows admins' daily lives, and have become such a thorn in the side of most environments that an entire tier of ISVs exists just to try and take the pain away. In fairness to Microsoft, great strides have been made in recent years to reduce profile pain. But with that said, its competitors still see an edge in including some sort of improvement to user profiles, both to keep them small and to keep their download at login to a minimum. For example, Citrix has a profile-streaming feature that is part of its Provisioning Server, and VMware successfully acquired RTO Software's virtual profile system, albeit failing to include it in their recent GA of VMware View 4.5. So I'm afraid taking a technology that first surfaced in NT 4.x and uplifting it with improvements does not "user state virtualization" make.
Application Virtualization (App-V)
For me the star of the show and of the course was App-V, and I was fortunate to have an instructor who clearly had a strong understanding of the product. It's for that reason I've placed App-V here in my article – I wanted to save the best for last.
If your main interest is App-V, you might want to consider the three-day course where App-V is the sole topic – Microsoft Course 7197APPV – Installing and Managing Microsoft Application Virtualization. My interest was simply a general assessment of the capabilities of the technology. In my experience, training on technologies like application recording, sequencing, packaging or virtualization is a useful jumpstart, but where you really learn is doing it in the real world, with real applications. I was fortunate to have two guys on the course with me who work for a major insurance firm in the UK, and who had been working with App-V for some time. They were really useful to have in the room to ask questions, because they were able to give their real-world experience back to the group.
So let me say from the get-go: I really like App-V. I think its strongest suit is not so much the virtualization of applications itself – a process that Microsoft call "sequencing" – but its management components. For example, there is a proper front-end to App-V which allows you to make changes once the sequencing is over, and I liked the fact you can control which operating systems the App-V application can execute on. Although I think you need to be careful with this – what if a new OS came out? You could find yourself having to correct this setting for applications that don't otherwise need re-sequencing.
One of the aspects that other application virtualization vendors seem especially weak on is the long-term management of the application itself. Most seem very focused on the "packaging" or "recording" process, but are weak when it comes to easily publishing the application to end-users, and storing the metadata that's involved in managing that application for the long term. I particularly liked the way App-V has its own internal method of upgrading applications from one version to another, with the client only having to stream the new parts of the application. As with other application virtualization technologies there is some element of hand-editing the files that make up the streamed application for troubleshooting purposes, but the vast majority of settings are visible in a management UI – not tucked away in a .INI file as is the case with VMware's ThinApp. It's worth mentioning that a great resource for learning the manual settings that can be added to the .OSD file within App-V resides on the TMurgent Technologies website. These settings can be useful in the small number of cases where an out-of-the-box sequencing of an application doesn't work. One setting that appears to be useful is in the "policies" section:
<LOCAL_INTERACTION_ALLOWED>TRUE</LOCAL_INTERACTION_ALLOWED>
This policy setting lowers the level of separation between the App-V application and the local operating system, which is sometimes required to make a legacy application function. The .OSD file also serves as a good location for calling scripts, both to prepare the environment before the application loads and to deal with clean-up operations – such as when an application waits for one process to terminate before the whole process ends. With that said, for the clean-up case you might be better off adding the value:
<TERMINATECHILDREN>TRUE</TERMINATECHILDREN>
Yes, it does sound like a weird thing to type – "terminate children" – but I've met many a stressed-out parent who has felt that way sometimes…
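To sketch the script-calling side of the .OSD, here's a hedged example of the SCRIPT syntax. The net use command is a stand-in for whatever preparation your application actually needs, and note the doubled backslashes, which my understanding is the SCRIPTBODY parser expects in paths:

```xml
<!-- Run a preparation script before the virtual application launches.
     WAIT="TRUE" pauses the launch until the script completes;
     PROTECT="FALSE" runs the script outside the virtual environment. -->
<DEPENDENCY>
  <SCRIPT TIMING="PRE" EVENT="LAUNCH" WAIT="TRUE" PROTECT="FALSE">
    <SCRIPTBODY>net use X: \\\\fileserver\\appdata</SCRIPTBODY>
  </SCRIPT>
</DEPENDENCY>
```

Flipping TIMING to "POST" and EVENT to "SHUTDOWN" gives you the matching hook for clean-up work after the application exits.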
I was pleased to see that App-V does offer controls that allow you to set how many instances of an application can run – to guarantee you meet your license agreements on concurrency. Due to the complexities of licensing, I've found some application virtualization vendors shirk their responsibilities in this area. Often they will say: look, we are about application packaging and distribution – speak to your vendor about licensing. I can understand why some application virtualization vendors see this area as being too complex to address in their technologies – but I was heartened that App-V at least made an attempt to offer management in this area. With that said, the licensing engine is a little bit clunky. Fortunately, I had a good instructor who showed how you could create a default "provider policy" for all applications that don't require any licensing limits. This policy would be applied to applications such as Adobe Acrobat Reader and Word 2008 Viewer. He then demonstrated how to create a provider policy that allows for a concurrency limit for the rest of the applications. It was one of those situations that is only obvious once it has been shown, and for me it was a reminder of how having a real instructor in the classroom helps draw your attention to problems you might not have considered, whilst at the same time pointing out the workarounds.
That's not to say there aren't challenges to deploying App-V. In the course I was able to identify two main ones – protocols and drive letters. App-V supports many protocols for streaming the application to the end-user. It's useful to see this process as involving three stages. First, the end-user's client must authenticate and receive the "applist", which is the list of applications that user has rights to. Secondly, there is a download of two core files: an icon file (.ico) and the file (.osd) that is used to tell the client where to stream the application from – the App-V content location. This content location is where you upload the App-V application after "sequencing". If you have multiple content servers for load-balancing and availability, you would probably use DFS to replicate the content across them – to keep the content locations in sync as and when you make changes to the App-V application.
Finally, once the user clicks the .ico file and loads the .osd file, they are then sent the .sft file. This is the file that contains the application to be streamed, and it can be divided into two parts – "feature block 1" and "feature block 2". The first feature block contains the core components needed to run the application, and can be customized to include additional components that you know the user is very likely to need, such as a spell checker, whereas the second feature block contains the remainder of the application, including components that may or may not be needed by the end-user.
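The "where to stream from" piece lives in the .OSD file's CODEBASE element. This is a hedged, cut-down fragment – the server name, GUID, paths and filenames are all placeholders, and a real .OSD carries more attributes than shown here:

```xml
<!-- The CODEBASE element points the App-V client at the .sft to stream,
     here over RTSP on the default port 554. -->
<IMPLEMENTATION>
  <CODEBASE HREF="rtsp://appv-server:554/content/MyApp.sft"
            GUID="00000000-0000-0000-0000-000000000000"
            FILENAME="MyApp.v1\myapp.exe"
            SYSGUARDFILE="MyApp.v1\osguard.cp" />
</IMPLEMENTATION>
```

Swapping the HREF scheme (rtsp, rtsps, http, file) is essentially how the protocol choices discussed below surface in practice.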
I would also like to mention that I liked the fact that App-V comes with both a "datacenter" model and a "branch office" model. This allows you to distribute App-V applications around your network. There's a rather interesting configuration that would allow you to split the App-V application into its various parts by modifying what is called the "ApplicationSourceRoot". This would allow you to host the smaller files that make up the App-V application at the datacenter, whilst making sure that when the streaming takes place, the network traffic generated occurs relative to where the end-user is located. For me this allows for a best-of-all-worlds configuration, where the customer can keep control over who has access to what from a central location – without creating unnecessary network traffic across the WAN. Of course, this configuration would not be needed in a more "hosted" environment where users get their desktop from terminal servers or virtual desktops held centrally in the datacenter. App-V comes with many different options on how to get the application down to the end-user – deployment options which I have found lacking in other application virtualization vendors' offerings.
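As a sketch, the override is a registry value on the App-V client. The registry path below matches the 4.5 client as I understand it, and the branch server name is a placeholder – verify both against your client version before relying on this:

```powershell
# Redirect .sft streaming to a branch-office content server,
# while the small .ico/.osd launch files still come from the datacenter.
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\SoftGrid\4.5\Client\Configuration" `
                 -Name "ApplicationSourceRoot" `
                 -Value "rtsp://branch-office-server:554/content"
```

Because it's just a registry value, it can be pushed out per-site via GPO preferences, which is what makes the split datacenter/branch model practical at scale.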
Multiple protocols are supported at each of the transfer stages, and the table below outlines the options:
| Transfer stage | Protocols |
| --- | --- |
| Application list | **RTSP**, RTSPS |
| Application launch files (.ico, .osd, .fta) | **CIFS**/HTTP |
| Application streaming file (.sft) | **RTSP**, RTSPS, HTTP, HTTPS, SMB |
Note: The protocols in bold are the default settings. The .fta file contains the "File Type Association" entries, which control which application is loaded when a user double-clicks a file.
On the surface it looks like we have been furnished with a rich array of options – but this "choice" disguises an underlying series of trade-offs and implementation decisions that will vary from one deployment scenario to another. Firstly, notice how there is no one protocol you could use to make the whole of the communication work – and this could make life difficult for you if you intend to stream applications through firewalls. That may not necessarily be a show-stopper if your intention is to use App-V to stream applications to either a Microsoft RDS or Citrix XenApp environment, because both the App-V content server and the hosts would likely be in the same datacenter on the same high-speed network – the same would apply if you were using virtual desktops hosted on VMware ESX using VMware View.
Secondly, it's unlikely that you would have just one App-V content server – most likely you would want to offer load-balancing and availability. This would mean using Microsoft's built-in NLB service or some third-party load-balancing system. The secure protocols such as HTTPS/RTSPS do not play nicely with load-balancing technologies, as this very security prevents load-balancing systems from analyzing their packet data. Now, that might lead you to think that the best protocol to use for an internal-only system might be HTTP. Indeed, there are now ways to deliver the application list via HTTP. However, if you do use HTTP then you will lose features such as the differential "version update" feature that I was so impressed by in the course. Just to be 100% clear, you can do version updates with the HTTP protocol; the difference is that the application has to be completely re-streamed to the client – whereas if you use the other protocols, only the differences in the new version need to be downloaded. That could be important when managing lots of physical machines or virtual desktops.
Note: The .OSD path and Icon path in the Default Application Properties dialog show some of the protocols at play in App-V
Thirdly, additional complexities are introduced on the protocol front once you are delivering App-V to physical desktops – especially laptops that spend time outside of your management. From what I can gather, most companies treat laptop users as an exception to their rules – and cache all the App-V applications to the laptop before the user takes the machine away. In this way, they work around the protocol issues and allow the laptop user to run App-V applications offline. These laptop users get their updates the next time they connect to the corporate network. Most businesses are fortunate that their true roaming user base is quite small. The main benefit of App-V here is not the “streaming” piece but the application isolation – which means fewer conflicts are likely to occur between one piece of software and another. But I think going forward this will be a challenge for ALL vendors of application virtualization, not Microsoft alone. We are all increasingly mobile, and the ratio of mobile users to the “work at the same office every day, Dilbert-style” user is changing rapidly.
The other challenge to businesses wanting to use App-V is deciding on a drive letter to use for the client cache. In case you don’t know, App-V defaults to using the Q: drive alias to store cached versions of App-V applications. Permissions are set on the drive letter to prevent even the administrator accessing it. When an application is “sequenced”, the App-V sequencer defaults to this Q: drive, and the value is encoded as part of the application. If the Q: drive letter is unavailable – because it is used, say, by a mapped network drive – then App-V will fail. There is no easy way to change this drive letter once this error in planning has been made – fixing it would require the re-sequencing of all the App-V applications affected. In reality this isn’t the end of the world. But before you embark on an App-V project you would want to closely analyze your environment to avoid such a conflict occurring.
You might see this as a relatively trivial issue, and in all honesty it is – but if you think about a large corporate, agreeing on a common drive letter across the business might take longer than you think. For example, it took the guys at the insurance company next to me more than a month to decide on the correct drive letter to use. As ever in the world of IT, it’s these soft management issues that seem to be the nub of delays rather than any specific technical issue. The dependency on the Q: drive can cause you additional challenges if your application is hard-coded to run in C: and doesn’t allow you to change the installation path to the Q: drive during the sequencing process.
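The environment analysis mentioned above boils down to a simple check across your estate. A minimal sketch, assuming your inventory tooling can report which drive letters are in use on each machine (the data structure here is invented for illustration):

```python
# Before an App-V rollout, flag every machine where the sequenced
# drive letter (Q: by default) is already taken -- e.g. by a mapped
# network drive -- since App-V will fail on those machines.
def drive_conflicts(machines, sequenced_drive="Q"):
    """machines: {name: set of in-use drive letters}. Returns conflicts."""
    return sorted(name for name, in_use in machines.items()
                  if sequenced_drive in in_use)
```

Running this against inventory data before you sequence a single application is far cheaper than re-sequencing everything after the fact.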
There was quite a heated debate in the classroom (mainly caused by me refusing, like a dog with a bone, to let go) about the merits of solutions like MED-V compared to application virtualization products like App-V, Spoon or ThinApp. The official line was that application virtualization couldn’t solve the kind of operating system incompatibility issues that arise from trying to port an application from Windows XP to Windows 7. Even companies like VMware recommend that a ThinApp should be “recorded” on the operating system it was developed for – and executed on the same operating system. In other words, the official line from most vendors is that you should not record an application on Windows XP and then try to execute it on Windows 7.
Of course ThinApp customers do that all the time whenever they create a ThinApp version of Internet Explorer 6 and make it run on Windows 7. The truth is that in 99% of cases you can use application virtualization to bring life to an application from Windows XP and make it run on Windows 7. That’s got to be a much more efficient way of handling a legacy application than using MED-V, or giving a user both a Windows XP and a Windows 7 virtual desktop via VMware View. Again, we are back to not using a hammer to drive home a thumbtack. Of course there might be cases where a Windows XP application does a check of the OS version and bounces the user out of the execution process. If that is the case, I personally think “shims” from the Microsoft Application Compatibility Toolkit (ACT) could be used to “fool” the application into thinking it is executing on Windows XP when it’s actually loading on Windows 7. Once again, it’s important to remember that the official line from Microsoft is that virtualizing IE6 is against both the support and EULA agreements.
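The “version lie” idea is simple enough to show as a toy. This is purely an illustration of the concept – real ACT shims intercept the Win32 version APIs (such as GetVersionEx) inside the process, not a Python function; the names here are invented for the sketch.

```python
# Toy model of an ACT "version lie" shim: the application's naive OS
# check is answered with the version it expects, so it agrees to run.
REAL_OS = (6, 1)    # Windows 7 reports version 6.1
XP_OS   = (5, 1)    # Windows XP reports version 5.1

def launch(app_requires, shimmed=False):
    """Simulate an app that refuses to start unless the OS check matches."""
    reported = XP_OS if shimmed else REAL_OS   # the shim lies about the OS
    return "runs" if reported == app_requires else "refuses to start"
```

Without the shim the XP-only application bails out on Windows 7; with the shim applied, its version check is satisfied and it carries on loading.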
This debate really got me thinking about the term “application virtualization”. What does it really mean? All the vendors use terms like a “virtual file system” and a “virtual registry”. But I got to wondering how “virtual” virtual applications really are. Application virtualization certainly doesn’t offer the same level of abstraction that VM virtualization offers – there always seems to be some kind of dependency on the underlying OS in some shape or form. So that got me wondering if we could talk about different vendors offering different qualities of application virtualization – or, put another way, some vendors offering a greater level of application virtualization than others. The vendor that can offer the highest level of application virtualization is the one who is going to offer the best chance of the application loading without errors or needing a complicated build process. Perhaps I’m making more of this than I should. After all, application virtualization offers a MUCH better way of managing applications than merely running a setup.exe and hoping for the best. So going forward, the question of how easy it is to implement and manage is what really matters – over and above a largely academic debate about levels of virtualization. For me that makes App-V the strong card in Microsoft’s deck currently.
Of course if you do go down the route of App-V there would be the client installation to consider. I must say I was a little bit put off by this. The App-V client has dependencies on two different versions of Visual C++, as it needs both the 2008 SP1 version and the 2005 SP1 version. Neither of these redistributables installs as an MSI file, which means that although you can push out the App-V client via GPOs, you would have to find some other method to send out its dependencies as well. That’s a long way from application virtualization that requires no client at all, and instead executes in a separate runtime space.
It also compares quite unfavorably to many clients from Citrix or the VDI vendors that are a single .MSI file with no other dependencies. This raises the thorny issue of having at some stage to upgrade the client from one version to another – an issue that a couple of members of my group had faced recently, where a new version of the App-V client was meant to be backwards compatible with an older client, which was later discovered not to be the case. The App-V client itself has lots and lots of different options – which can be set locally or controlled by GPO. I was surprised to see this because I had hoped the management system of App-V would be able to control most of these. Vendors often “sell” this detailed client customization as a benefit, but in my experience it is often the bane of the system admin’s life. It took me back to my early days of Citrix and their “Program Neighbourhood”, which was jam-packed with settings and options. In most cases admins want the client to have as little opportunity for customization as possible, to stop users mucking about with settings they don’t understand. A portion of my time was spent showing students how the Citrix client could be bolted down so users could do nothing with it. Fortunately, App-V does allow for a custom .ADM file to be imported into Active Directory. App-V does come with a very good management system for controlling how the backend infrastructure works; however, the end-user experience is controlled via the Microsoft Group Policy system. On the plus side, there are plenty of diagnostic tools that come with the client, such as sftmime.exe and sfttray.exe – it would be great if App-V could retain these troubleshooting utilities but offer a much smaller client footprint along the same lines as the Citrix web plug-in or the Citrix PNAgent – basically an agent that sits in the tray and offers zero configuration to the end-user.
Finally, the last issue with App-V is not a technical issue but a licensing and marketing one. It’s still the case that you cannot purchase App-V separately from the MDOP, and the MDOP is only available to volume licensing customers who have purchased Software Assurance from Microsoft. Listening to the customers on the course, there was a strong opinion that packaging the product in this way prevented them from purchasing only the technologies they needed. The end-user licensing of MDOP is very cost-effective, but the fact that the MDOP is still tied to the SA agreement is seen as a barrier to its wider adoption. I think this is an issue Microsoft has tried to address by upgrading the MDOP offering to contain additional technologies since the R2 update – the notable addition being the inclusion of MED-V. But as we saw earlier, if you’re not convinced that local execution of a virtual desktop is a long-term strategy, you could argue that the bundling prevents consuming the various technologies in an à la carte fashion. From what I could gather, the customers were essentially purchasing MDOP merely to gain access to App-V.
The Microsoft course covered some of the common problems of application deployment, such as conflicts between applications and the operating system. At times I was wondering – yes, this solves a problem, but who was the cause of that problem in the first place? Anyway, it doesn’t interest me in the least to beat up Microsoft or any other vendor for that matter. We are where we are; the only questions that matter are whether we are heading in the right direction, and what is the best blend of solutions given the situation we find ourselves in. I will leave beating up Microsoft to the Linux and Apple folks!
Much as I adore application virtualization – I think it’s good in its own regard, much like server virtualization, to the degree that it’s almost irrelevant which vendor you pick – we have to accept that not all applications will be virtualizable [Is that a word? It is now…]. That means, whether we like it or not, a percentage of applications will still have to be installed to the operating system. If I were in charge of business application delivery, this is the model I would choose:
Firstly, I would continue to use a terminal services style solution. If it had to be RDS then fair enough – but personally I would prefer some kind of bolt-on enhancement such as Citrix XenApp or, for those looking for something more cost-effective, perhaps the 2X product. These TS boxes would ideally run a 64-bit OS in VMs on top of a hypervisor. This would allow me to scale out the solution without necessarily increasing my server footprint in the datacenter.
I would deliver my applications to these terminal services systems using some kind of application virtualization. For the 10%-20% of applications that respond negatively to application virtualization, or that are a corporate standard such as Microsoft Office, I would probably install locally – with legacy flavours of Office being delivered on demand using application virtualization. Where I could construct a viable usage case I would deliver remote access to a virtual desktop, essentially for “Power Users” whose demands exceeded what can be provided via the shared desktop. For me VDI continues to be a strategic solution for large-scale customers – though scalability still seems to be its biggest Achilles heel. Don’t get me wrong, I know the server and storage vendors have produced documentation and blueprints for large VDI deployments – my worry is the cost of putting in the infrastructure to achieve that scalability. It’s an investment that not every customer can afford.
For my laptop users I would continue to install Windows locally, but I would again use application virtualization wherever possible, ideally with applications pre-cached on the laptop before the user takes ownership. Application virtualization is the key plank of any delivery solution. The desktop ONLY exists as an environment via which users get their applications. The trouble we have is that there are so many applications, and they don’t always play ball with each other under the same operating system.
Application virtualization offers us a get-out-of-jail-free card in a server-compute model where previously we had to have silos of servers to avoid application conflicts and support “legacy” applications. The only caveat to remember at the moment is that the maximum size of an App-V application is currently 4GB, so if you have applications that are larger than this you may have to consider splitting them out into a series of smaller applications, or installing to the local machine. App-V does support “Dynamic Suite Integration”, which allows you to link separate App-V applications together. This would allow you to separate a suite into its constituent parts – and then blend them together in any combination you like. However, back in the real world, there was some debate within my group about how well supported that might be. In fact one of the group said he’d been advised by a representative of Microsoft not to use this feature yet – because it can make support, and getting to the bottom of a problem, harder. A feature which might be great on paper may not pan out in practice.
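The 4GB caveat is another check worth automating before a rollout. A minimal sketch, assuming you can get package sizes from your packaging team (the limit value is the one quoted above; the data structure is invented for illustration):

```python
# Flag any package over the App-V size limit so it can be split into
# smaller linked packages (e.g. via Dynamic Suite Integration) or
# installed locally instead.
GB = 1024 ** 3
APPV_PACKAGE_LIMIT = 4 * GB   # the 4GB maximum quoted in the course

def needs_splitting(packages):
    """packages: {name: size_in_bytes}. Returns names over the limit."""
    return sorted(n for n, size in packages.items()
                  if size > APPV_PACKAGE_LIMIT)
```

Anything this flags is a candidate for the “split it or install it locally” decision discussed above – better made during planning than mid-deployment.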
The reality is there has never been more choice in the different ways to blend application delivery solutions together. For me application virtualization is a no-brainer if you intend to use terminal services. The difficulty with all this choice is the complexity that comes with it – the countless approaches and options. But remember, these complexities exist as a way of solving the problems of the past – the days of not being able to run whatever application you like, how you like, where you like, are well and truly over. But whatever you do, be sure to add application virtualization to your armory.
The only downside I can see in this multi-platform environment is that you could find yourself doing application “sequencing” for multiple operating systems, such as Windows XP, Windows 7 and Terminal Services. The other thing I would say is that application packaging has always been an art rather than a science, and successful adoption is highly dependent on how well understood the application is internally, and on the quality of the relationship with the ISV. The best folks to do App-V sequencing are the folks who are already handling the packaging and QA of applications. As ever, the community is your friend – it’s always worth looking for sequencing “recipes” online from those who have done it already, rather than re-inventing the wheel over and over again. For example, unpleasant things can happen if you do not pre-populate the ODBC environment with dummy settings: if an application install creates a DSN, then the entire ODBC stack can be rolled into the recording, rather than just the changes created by the installer.
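The ODBC gotcha comes down to how a sequencer works: it records everything present after the install that was not there before. A toy model makes the logic clear – the key names here stand in for whatever registry keys and files the real sequencer would observe:

```python
# The sequencer captures the difference between the pre-install and
# post-install state of the machine. If the ODBC stack only appears
# during the install, the whole stack is swept into the package; if a
# dummy DSN primed it beforehand, only the app's own DSN is captured.
def sequencer_delta(before, after):
    """before/after: sets of observed registry keys/files."""
    return after - before

clean  = set()                               # no ODBC stack pre-installed
primed = {"ODBC.INI", "ODBCINST.INI"}        # dummy DSN created beforehand
install = {"ODBC.INI", "ODBCINST.INI", "AppDSN"}
```

On the clean machine the delta is the entire ODBC stack plus the DSN; on the primed machine it is just the application’s own DSN – which is exactly why the recipes say to pre-populate dummy settings before recording.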
With all these caveats stated, I personally see it as inevitable that, just as companies adopted a “virtualization first” policy for new servers, we will shortly see the same policies applied to end-user applications. My hope in the long term is that those companies who own the application virtualization technologies will make their own applications available already bundled in this format. For Microsoft that means Office and other clients need to be available as App-V applications, and vendors like VMware need to make their vSphere and View clients available in ThinApp format.