
What I learned last week

A quick rundown of things that were discovered in the last week, including RDMs and the 2TB Limit and some nice bits of PowerShell.

I don’t know everything about VMware. What I like to do during my training courses is admit that EVERY DAY I learn something new about VMware which I’ve never come across before. It serves to remind everyone that old dogs like me do learn new tricks every day. So, in that spirit, I want to try every week to tell you what I learned about VMware that week. You know the kind of thing: tidbits that make you go “ahhhhh, I didn’t know that”.

RDMs and the 2TB Limit:

OK, I’m ashamed to say that I didn’t know that RDMs, like virtual disks (.vmdks), are limited to 2TB in size. I assumed that ESX4 had smashed through this limitation. But no, it’s still there. To be more accurate, the limitation is 2TB – 512B = 4,294,967,295 blocks (at 512B per block). The limitation doesn’t come from VMFS but from the fact that the VMware Logical Volume Manager still uses CHS (cylinders-heads-sectors) as the method of enumerating LUN size, rather than GPT (GUID Partition Table). Despite the fact that the GPT method is supported in Windows (since Windows 2003 SP1), it is not supported by VMware. Of course, there are work-arounds if you do need >2TB, but the restriction remains…
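If you want to see which of your RDMs are creeping up on that ceiling, PowerCLI can report them. A minimal sketch, assuming you already have a session open with Connect-VIServer; the “Raw*” match catches both the RawVirtual and RawPhysical disk types:

```powershell
# List every RDM in the inventory with its size in GB.
# Get-HardDisk reports CapacityKB; 2TB = 2,147,483,648KB.
Get-VM | Get-HardDisk |
  Where-Object {$_.DiskType -like "Raw*"} |
  Select-Object Parent, Filename, DiskType,
    @{Name="SizeGB"; Expression={[math]::Round($_.CapacityKB/1MB, 2)}}
```

Dividing CapacityKB by 1MB (1,048,576) conveniently yields gigabytes, since 1GB = 1,048,576KB.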

Some Nice Bits of PowerShell:

While helping some people on the forums I came across some sweet pieces of PowerShell. I’m collecting samples of common tasks so I can call them up from a personal library in my lab environment. So here’s what I found last week:

Find VMs on Local Storage:

Get-Datastore | where {$_.Name -match "store|local|storage"} | Get-VM | Get-HardDisk | select Filename | Export-Csv c:\LocalVMs.csv

Force an ESX into Maintenance Mode:

# substitute your own host name for esx1.lab.local
Get-VMHost -Name esx1.lab.local | Set-VMHost -State Maintenance

List all VMs with their IP Address:

Get-VM | select Name, @{Name="IP"; Expression={foreach($nic in (Get-View $_.ID).Guest.Net) {$nic.IpAddress}}}

vApps on a stand-alone ESX host:

In case you don’t know, a vApp in vSphere4 is a glorified resource pool – it allows you to gather a bunch of related VMs into a single object (the vApp). From there you can set resource settings (as on a resource pool) but also do funky stuff like start-up/power-down orders and different methods of allocating IP addresses. NOW, if you create a vApp on a stand-alone host, and then subsequently try to add that host to a DRS-enabled cluster, you will have a problem (see graphic below), as the vApp is destroyed during the process. Moral of the story? Create the VMware cluster first, then create your vApps…
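Doing it in the right order from PowerCLI looks something like this. A minimal sketch, assuming a DRS-enabled cluster and the VM names shown (“Cluster1”, “WebStack”, “web01”, “db01” are all illustrative):

```powershell
# Create the vApp inside the cluster FIRST, then drop the VMs
# into it - built this way it won't be destroyed later, as it
# would be if created on a stand-alone host that subsequently
# joined a DRS cluster.
$cluster = Get-Cluster -Name "Cluster1"
$vapp = New-VApp -Name "WebStack" -Location $cluster
Get-VM -Name "web01","db01" | Move-VM -Destination $vapp
```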

Manual intervention required in Update Manager if vCenter4/VUM is run in a VM:

I’m a big advocate of running vCenter/VUM and other VMware Infrastructure components in a virtual machine. In fact, I’m often horrified to discover that people physicalize those roles. Anyway, I don’t want to get into that debate – instead I want to flag up an anomaly in VUM in vCenter4. If you go to remediate the ESX host which is running the virtualized vCenter4/VUM, you will see the error below. To resolve it, you must manually move the VMs to different ESX hosts in the cluster. Normally, I run my virtual infrastructure components on a separate ESX host (esx4) in a different environment from esx1, esx2 and esx3. But last week, to keep everything up and running and patched to the SAME level, I joined esx4 to the DRS-enabled cluster – and so came the warning:
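The manual move itself is easily scripted. A minimal sketch, assuming a connected PowerCLI session and that both hosts sit in the same cluster (the host names are illustrative); VMotion all the powered-on VMs off the host before kicking off the remediation:

```powershell
# Evacuate the powered-on VMs (including the virtualized
# vCenter/VUM itself) from the host about to be remediated.
Get-VMHost -Name "esx4.lab.local" | Get-VM |
  Where-Object {$_.PowerState -eq "PoweredOn"} |
  Move-VM -Destination (Get-VMHost -Name "esx1.lab.local")
```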
