I thought some people might benefit from a recent question I received by email. Here’s the question:
I have picked up a lot of great information from your website and it has helped me a great deal over the past couple of years. I have a question and was wondering if you plan on doing anything with a kickstart installation and vSphere 4 – there have been some changes to the way the script is structured, and using the VMware manual is quite daunting at times. My main problem seems to be configuring the hard disks.
Here is another question you might be able to answer for us, as we have been trying to understand v4. The question is this: when I am at the console and I do an Alt-F1 and log on as root, am I on the physical server or am I in the esxconsole? And if we are in the console, how am I able to see the vmdk file which contains the esxconsole? The follow-up to this is: if I am not in the esxconsole when I log on, then what is the esxconsole and what is it used for?
You do not have to spend a lot of time explaining this – you can be brief, like "hey dummies, here is what you need to know" – oh, and we have read the manual.
Thank you sir, your time is greatly appreciated.
This is my response:
You won’t be surprised to hear that during the beta programme, information from VMware on scripted installations was decidedly thin on the ground. I worked very closely with the creators of the EDA and UDA install appliances to get both PXE booting and scripted installations working. I was lucky to have contacts in VMware Education at the time of the ESX4 beta, who were using kickstart to build their own lab environments, so I could squeeze a little information out of them that helped. But much was achieved by research – in other words, repeatedly retrying scripted installations until they worked. It’s not the way I like to learn technology myself, but if a vendor is unforthcoming then I won’t let that stop me. You are quite right to indicate that whilst the documentation has improved, it still isn’t comprehensive. Quite how VMware imagines folks are going to deploy 100s of ESX hosts without being more forthcoming is anyone’s guess. It has always amazed me what little effort VMware puts into assisting people in deploying ESX – perhaps they leave that for their PSO work. Nice.
Anyway, just to clarify. The service console now uses a virtual disk. Logically, this means that a VMFS volume must be selected or created during the installation, before the partition tables for the ESX host are created. That VMFS volume must be big enough to hold the esxconsole.vmdk file, which in turn must be large enough to hold the partitions within it. Not everything about ESX is held in the .vmdk, though.
There are some system partitions which can only be custom-created by kickstart – in other words, they don’t appear as options in the standard GUI installation, so if a GUI installation is done you just get the standard partitions. These partitions reside OUTSIDE of the virtual disk.
If you do a standard installation, the remaining partitions reside inside the virtual disk.
Historically, it’s this partition table that we have “customized”. I was somewhat unsure whether folks would want to do that – in the end I decided that most would want to create a custom partition scheme in the .vmdk file. So I also invested some time working out how to do it with kickstart scripts. This new disk structure does introduce some interesting experiences at the Service Console CLI. For example, here is the output of the ‘mount’ command on an HP ProLiant:
/dev/sdh8 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw)
/dev/cciss/c0d0p1 on /boot type ext3 (rw)
/dev/sdh5 on /home type ext3 (rw)
/dev/sdh7 on /opt type ext3 (rw)
/dev/sdh6 on /tmp type ext3 (rw)
/dev/sdh1 on /var/log type ext3 (rw,errors=panic)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
Normally, in ESX3 /dev/sdh5 would have been /dev/cciss/c0d0pN. But as it’s now a virtual disk, you see the more common /dev/sdN syntax… I guess what must be happening is this: the vmkernel loads, then loads the VMFS driver – which then allows the vmkernel to mount the VMDK file, and then access the remainder of the system. The boot loader is grub, as you know, and the bootstrap environment appears to be the same “busybox” environment used to load ESXi. Again, there is really NO architecture information on how this is done. There might have been a VMworld session on this – but I wouldn’t know, and to be honest I really don’t care too much how it is done so long as it works!
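This also answers the earlier question: when you log on at Alt-F1 you ARE inside the Service Console, and yet – because the vmkernel has already mounted the VMFS – the esxconsole.vmdk is visible from within it. A quick way to see this for yourself (the datastore name below is just an example from my lab, and the esxconsole folder naming is my observation, not documented anywhere I’ve found):

```
# vdf is the VMFS-aware version of df in the Service Console
vdf -h

# The virtual disk lives in an esxconsole-<UUID> folder on the VMFS volume
# "esx1_local" is an example datastore name - substitute your own
ls -lh /vmfs/volumes/esx1_local/
```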
Anyway, I’m hoping that clarifies what’s going on here…. Now to the real meat – scripted installations…
I’ve taken the liberty of attaching my sample KS script to this email. It’s very similar to the one on RTFM associated with the UDA… Here’s the part that does my custom partition scheme. I’ve put comments in to explain what I am doing – hopefully this will clarify things for you. The UDA uses “variables” to hold common parameters like hostname, IP and so on. I’ve replaced these [VARIABLES] with plain text to make it look more like a bog-standard kickstart file.
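To show what I mean, here’s how a networking line might look in a UDA template versus the plain-text equivalent – the variable names, IP addresses and hostname below are purely illustrative, not values from my actual script:

```
# UDA template version - [IPADDR] and [HOSTNAME] are substituted at build time:
# network --bootproto=static --ip=[IPADDR] --hostname=[HOSTNAME] ...

# Plain-text kickstart equivalent:
network --bootproto=static --ip=192.168.3.101 --netmask=255.255.255.0 --gateway=192.168.3.254 --nameserver=192.168.3.1 --hostname=esx1.example.com --device=vmnic0
```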
# Clear Partitions
clearpart --drives=/dev/cciss/c0d0 --overwritevmfs
The clearpart command, as you might know, destroys partitions. Quite a dangerous command. But if I was doing a re-install and wanted the install to completely wipe my previous installation, it would fail without the use of a new parameter introduced in ESX4.x called --overwritevmfs. This destroys the partition scheme on /dev/cciss/c0d0… including the VMFS volume that holds the esxconsole.vmdk….
# BootLoader (grub is the only choice by default)
bootloader --location=mbr --driveorder=/dev/cciss/c0d0
The --driveorder parameter sets which disk the MBR record goes to – without this, scripted installations can stall.
# Manual Partitioning
part /boot --fstype=ext3 --size=250 --ondisk=/dev/cciss/c0d0
part None --fstype=vmkcore --size=100 --ondisk=/dev/cciss/c0d0
part esx1_local --fstype=vmfs3 --size=20000 --ondisk=/dev/cciss/c0d0 --grow
So here I’m creating the three partitions that make up the physical partition scheme. All quite straightforward – in fact my settings here are more or less exactly the SAME as what you would get with a manual installation – but I need them in the script, and it’s good to know they can be changed. The odd one is the last one – where I make the VMFS volume 20000MB, followed by --grow. It appears that a positive integer must be supplied, and that this number is checked against the total partition space defined below – EVEN though --grow means use the remainder of the disk. As a word of caution, logically only ONE partition can be designated to grow – for obvious reasons!
virtualdisk vd1 --size=15000 --onvmfs=esx1_local
part swap --fstype=swap --size=1600 --onvirtualdisk=vd1
part /opt --fstype=ext3 --size=2048 --onvirtualdisk=vd1
part /tmp --fstype=ext3 --size=2048 --onvirtualdisk=vd1
part /home --fstype=ext3 --size=2048 --onvirtualdisk=vd1
part / --fstype=ext3 --size=5120 --onvirtualdisk=vd1 --grow
The virtualdisk vd1 line creates the first virtual disk. It is theoretically possible to create multiple virtual disks – I just don’t see the point in doing so. I create a virtual disk which is 15GB in size. Then, using the part command, I create the partition scheme inside it. This does mean there is free space available in the VMFS for local virtual machines, if I’m so inclined. The partitions actually add up to about 12.6GB. As for the scheme itself, I’m more or less creating the same partition scheme in the virtual disk as I would have done on the physical disk in ESX 2/3. Although the partition scheme is 12.6GB in total, I actually let the / volume grow – so once swap, /opt, /tmp, and /home have been created, / then uses the remainder of the disk.
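For completeness, the partitioning snippets above would sit inside a fuller kickstart file alongside the usual installation directives. A minimal sketch is below – the root password, install source and network details are placeholders for illustration, not values from my actual script:

```
accepteula
install cdrom
rootpw mypassword
keyboard us
timezone UTC
network --bootproto=static --ip=192.168.3.101 --netmask=255.255.255.0 --gateway=192.168.3.254 --hostname=esx1.example.com --device=vmnic0
reboot

# Clear partitions and set the bootloader, as discussed above
clearpart --drives=/dev/cciss/c0d0 --overwritevmfs
bootloader --location=mbr --driveorder=/dev/cciss/c0d0

# Physical partitions
part /boot --fstype=ext3 --size=250 --ondisk=/dev/cciss/c0d0
part None --fstype=vmkcore --size=100 --ondisk=/dev/cciss/c0d0
part esx1_local --fstype=vmfs3 --size=20000 --ondisk=/dev/cciss/c0d0 --grow

# Virtual disk and the partitions inside it
virtualdisk vd1 --size=15000 --onvmfs=esx1_local
part swap --fstype=swap --size=1600 --onvirtualdisk=vd1
part /opt --fstype=ext3 --size=2048 --onvirtualdisk=vd1
part /tmp --fstype=ext3 --size=2048 --onvirtualdisk=vd1
part /home --fstype=ext3 --size=2048 --onvirtualdisk=vd1
part / --fstype=ext3 --size=5120 --onvirtualdisk=vd1 --grow
```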
Finally, there are as many ideas about how the ESX console should be partitioned as there are stars in the firmament. What interests me is the technology that allows that to happen – and how it is done. I will leave it to you to make those decisions…