This week I was down in Brentwood, London, at the EMC location there. I rather cheekily invited myself along by approaching...
Chad Sakac. It's an internal event which is meant to be purely for EMC employees, but I managed to "gate crash" the party and sit in.
Anyway, it was a great two days and super-intensive. I learned a truckload of really interesting stuff, some of which is in the realms of NDA and competitively sensitive, so to continue I'm going to have to be very careful to stay on the right side of Chad & Co.

Firstly, I had no idea of the breadth of EMC's integration with VMware, and with EMC being such a big company, it was no surprise I was ignorant. In fact, it's such a big company that I imagine even employees have a challenge keeping up to date with everything the organization is doing. I had EMC tagged as a "storage" company, specifically Fibre Channel. That's a perception widely held by most folks in IT as well, so I feel comfortable admitting my ignorance. I was particularly impressed by the Replication Manager (RM) and backup (Avamar) technologies. I was also impressed by how pro-NAS and pro-iSCSI EMC are with the Celerra technologies. In fact, I was recently given access to both CLARiiON and Celerra systems in my labs, and of course I was totally fixated by the fact that I had a decent Fibre Channel infrastructure (because for YEARS I've lusted after one). Now I'm beginning to realise how important access to production-quality NAS/iSCSI is going to be to me. Very few vendors are serious about supporting any protocol and any access mechanism, and there are cases where NAS outshines block storage (such as in a VDI environment), and cases where block storage will outshine NAS when it's coupled with the VMware VMFS file system.
I got a lot of tips and tricks from Chad over the two days as well. Generally, Chad has much better access to the developers behind the VMware vSphere4 product than I do (or at least it feels that way sometimes!). He can go directly to the people who code this stuff and ask how it works and why they built it that way. So he did some really excellent myth-busting. The biggest one for me is that VMFS extents are GOOD. They improve performance in most cases, and the community should completely reconsider its position on them. I include myself in this. It was my understanding that VMFS extents were filled serially: that is to say, if you had 10 LUNs in a VMFS extent, the vmkernel would fill LUN1 first, then LUN2 and so on, and that the loss of any LUN would result in the loss of the extent and, as a consequence, the data. NONE OF THAT IS TRUE. Here's the annoying thing: to some degree, VMware's own documentation and courseware has been restating these myths for some time. It's a case where the technology has changed but the documentation has lagged behind. Folks like me have read the official docs and repeated this warning as gospel when, in fact, it's been wrong. I'm quite embarrassed by that personally; in a way, I've been part of the process of promoting and disseminating incorrect information. Anyway, it was all done in good faith. Here's what actually happens: the files on a 10-LUN extent are created randomly. Totally randomly. Therefore the I/O of the VMs is distributed across the LUNs. That's the smart thing to do, isn't it? I should really have questioned the official docs, because I'm used to VMware doing smart things, not dumb ones.
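To make the difference concrete, here's a toy model (nothing like the real vmkernel code, just a sketch under the assumption of purely random placement) contrasting the mythical "serial fill" behaviour with the random placement Chad described. With serial fill, every new file lands on the first LUN until it's full, so one LUN soaks up all the I/O; with random placement, files (and their I/O) end up spread across all the LUNs in the extent:

```python
import random
from collections import Counter

def place_files_serial(num_files, lun_capacity):
    """Toy model of the MYTH: fill LUN1 completely, then LUN2, and so on."""
    # File i lands on LUN (i // lun_capacity): the next LUN is only
    # touched once the previous one is full.
    return [i // lun_capacity for i in range(num_files)]

def place_files_random(num_files, num_luns, rng):
    """Toy model of the described behaviour: each new file lands on a
    randomly chosen LUN, spreading the I/O across the whole extent."""
    return [rng.randrange(num_luns) for _ in range(num_files)]

rng = random.Random(42)
serial = place_files_serial(100, lun_capacity=50)
randomised = place_files_random(100, num_luns=10, rng=rng)

# Counter shows how many files each LUN received in each model.
print("serial fill:", Counter(serial))       # everything crammed onto the first LUNs
print("random placement:", Counter(randomised))  # roughly even spread
```

Obviously the real allocator is more sophisticated than `randrange`, but the point stands: random placement gets you the I/O distribution for free, which is why extents are not the performance bottleneck the old docs implied.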
The other thing I learned about this week was all the new plug-ins to VirtualCenter from EMC. To tell you the truth, I already knew what these were about and have just got round to writing about them. There's a plug-in for vCenter that allows Celerra to automate the failback process (something I blogged about in March). There's also a plug-in for Celerra to automate the mass deployment of VMs for a VDI project. And there's a new Storage View plug-in which knits all the VMFS volumes, LUNs, paths and so on into one handy location in VC. If you're an EMC customer, you should be looking at these plug-ins today.
Finally, I think I understand why vSphere4 has this "Virtual DataCenter Operating System" tag. And (gasp) I'm beginning to understand the whole cloud thing. This came from having the advantages of the VCE (VMware, Cisco, EMC) alliance explained, and also the Cisco Unified Computing System. As you should know already, Cisco has entered the blade and HBA market. The blades have the latest Intel processors, truckloads of memory and either 2x10G or 4x10G ports on them. These ports can be used for either storage or conventional network traffic. I will say that again: that's one NETWORK for both storage and conventional network traffic, with a fraction of the cabling you would normally require. Say goodbye to the jungle of cables you have at the back of a typical rack; the cabling behind the system is massively reduced and simplified. And rather than being managed on an enclosure-by-enclosure basis, the blades, the network and the storage can all be managed from one management UI.
Now, where does VMware fit into this? It really falls to VMware to utilize this quantity of hardware properly, given how closely they work with Cisco and EMC (no surprise there, I guess!). We now have a new generation of hardware which is ultra-dense, squeezing VERY large amounts of CPU, memory and connectivity into a very small space. Only ESX can make this hardware usable in a meaningful way to the average guy in a server room. People have achieved this consolidation with software so far (VMware); now the hardware vendors have caught up by consolidating and simplifying the hardware.
I know what you're thinking. Mike, this is all marketing guff, and they're just putting the word "virtualization" in there to make you think something old is new.
I pride myself on being a total cynic and skeptic when it comes to all that stuff. I think the problem for us is this: ALL the vendors are going to say "our hardware is designed for virtualization." The claim is easier to make than it is to prove, unfortunately.
Secondly, if all this change scares the bejesus out of you, that's OK. It does me. The good thing is that it's going to take some time for this new paradigm to get off the ground, which means you will have plenty of time to learn about it. VCE represents a huge investment by the three companies in real "blue sky" thinking; they are ploughing R&D money into the datacenter of the future. Now, that doesn't mean it's so far away you don't have to think about it. Be aware that it is coming, and don't be scared. Hardware that integrates tightly with VMware ESX and vCenter is a GOOD THING!
Thirdly, the cloud. Right, I get it. This is what the cloud is about. Have you noticed that in this blog post I've not mentioned the word "Microsoft" one little bit? So where does MS fit into the model above? Because it does fit in. It's this wee tiny thing called the "Guest Operating System" that runs inside a VM, inside this MUCH BIGGER system called VMware, running on hardware which is either Cisco UCS or whatever system you have selected from a competitor. The storage backend could be EMC or could be someone else. The important thing for me is that the network/blade vendor could change (Cisco, HP, IBM) and the storage vendor could change (EMC, HP, NetApp). What remains is VMware, and without VMware in place this new hardware architecture doesn't knit together. Forget about the great and the good trying to define standards (rather like some "Central Committee" in a communist planned economy) which are outmoded before they are even delivered. What will drive standards (as ever) are companies like Intel, Cisco, EMC and VMware (filthy capitalist innovators and entrepreneurs) working together to make sure their R&D is recognised as a standard. In other words, the proprietary becomes an industry standard, and then becomes a vendor-neutral standard.
So why call this a cloud? Here's why: Microsoft doesn't know what to do with a cloud. It doesn't really know what one is (because no one really, really does at the moment), and Microsoft doesn't have the depth of integration and co-operation that VMware has with Intel, Cisco, EMC and the other big players. You see, Windows is just a guest operating system, and it wasn't designed for these macro-economies-of-scale workloads. Or, put more cynically: if VMware talks clouds, that wrong-foots Microsoft. Microsoft wants to talk about hypervisors and management tools (that's so early-2000s). VMware wants to talk about clouds (that's so 2010-2020).
Do you see? Cloud is a strategy designed to wrong-foot Microsoft. You can see this is just like virtualization in this decade. It took Microsoft YEARS to catch up with virtualization (which they initially poo-poo'd), and then they came out with such utter bollocks as "you don't need VMotion" (whilst they now clumsily fumble to promote live migration in R2 of Hyper-V). By the time Microsoft wakes up and smells the bacon, VMware will be the de facto standard in very, very large high-density datacenters (aka the cloud).
So, if you like, dismiss the cloud as "marketing". But remember this: without a good marketing strategy, Microsoft WOULD WIN. In other words, without a good marketing strategy, the best product (VMware) would be destroyed by sub-standard products and technologies (Microsoft) through marketing alone. We've all seen this before with other market leaders.
So what's the future of MS if this happens? Much reduced: a significant player in the market, just like IBM are now. The app that runs in the guest operating system inside the VM is still what the end user connects to, but that is a much smaller piece of the bigger picture.
Now I understand why the CEO of VMware is an ex-Microsoft guy who ran a cloud computing start-up. Only someone like that would understand how to beat Microsoft at their OWN game. Get Microsoft onto ground where their marketing and sales reps feel uncomfortable, and they will squirm.