Today I did some work on accessing NFS “shares” from ESX 3.0/VirtualCenter 2.0. It is now possible to use NFS network storage (via the portmap/NFS services) for “datastores” within VirtualCenter. These can be used for ISO/FLP files and perhaps, for development purposes, virtual disks.
It took me a while to get things going (my Linux skills not being what they should be). I also found the release notes useful: https://www.vmware.com/products/beta/esx-vc/releasenotes_esx_vc.html#nas
First, build your Linux file server. I used Red Hat Linux Advanced Server 3.0, because this edition supports NFS over TCP and NFS version 3, which is a requirement.
- Log on to your NFS file server and edit the /etc/exports file like so:

/test 192.168.2.101(rw,no_root_squash)

Note: This allows the server 192.168.2.101 to access the mount/volume called /test. The rw option gives this server read and write access. By default, the root user is “squashed” to an unprivileged account and does not get full access to the volume; the no_root_squash option disables this, allowing applications like VirtualCenter read/write access to the volume.
- Then edit the /etc/hosts.allow file, adding the line:

portmap: 192.168.2.101
Note: This allows the server 192.168.2.101 access to the portmap service.
- (Re)start the portmap and NFS services with:

service portmap restart
service nfs restart
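Before moving on to ESX, it is worth checking that the export is actually visible. A quick verification, run on the NFS server itself (these are standard Red Hat utilities; the /test export is from the example above):

```shell
# List the active exports as the kernel currently sees them
exportfs -v

# Confirm the portmapper has registered the NFS services
rpcinfo -p | grep nfs

# Query the export list the way a client would (here, against ourselves)
showmount -e localhost
```

If /test does not appear in the showmount output, re-check /etc/exports and restart the nfs service before troubleshooting anything on the ESX side.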
The NFS server is now ready for use. Now create a VMkernel-enabled switch, unless you have done so already.
- Next, Login with the VI client.
- Choose the ESX Host from the list, select the Configuration tab and, in the Hardware pane, choose Networking
- Click Add Networking
- Choose (c) VMkernel
- Change the Network Label to be something meaningful, like IP_Storage
- Under the IP Settings, type in a valid IP Address and Subnet Mask (also set the Default Gateway, if you have not done so already)
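The same VMkernel port can be created from the service console instead of the wizard. A sketch using the esxcfg-* tools, where the vSwitch name and uplink NIC are my assumptions and the IP settings match the example above; adjust all of these for your host:

```shell
# Create a new virtual switch and attach a physical uplink (names assumed)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1

# Add a port group for the VMkernel port, matching the label in the wizard
esxcfg-vswitch -A IP_Storage vSwitch1

# Create the VMkernel interface with its IP address and subnet mask
esxcfg-vmknic -a -i 192.168.2.101 -n 255.255.255.0 IP_Storage

# Set the VMkernel default gateway, if not already done (gateway assumed)
esxcfg-route 192.168.2.254
```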
Next, add in the NFS Storage as a datastore
- Next, login with the VI client.
- Choose the ESX Host from the list, and select the Configuration tab
- In the Hardware pane, select Storage (SCSI, SAN, NFS) in the right-hand side of the VI Client and click Add
- In the Wizard, choose (c) Network File System
In the “Locate Network File System” page, complete the dialog as follows:

Server: the name of your NFS server, in my case NFS1. (Confirm that name resolution and IP connectivity are working as well)
Folder: the mount/volume you wish to access, in my case /test
Datastore Name: nfs-test (or anything you deem suitable)
It can take some time for the NFS mount to appear.
For those of you who prefer the command line and want options beyond the GUI, take a look at esxcfg-nas -? for other options.
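As an example, the datastore created in the wizard above could equally be added from the service console. The syntax below is how I read the esxcfg-nas help output, with NFS1, /test and nfs-test taken from the earlier steps:

```shell
# Mount the NFS export from host NFS1, share /test, as datastore nfs-test
esxcfg-nas -a -o NFS1 -s /test nfs-test

# List the configured NAS datastores to confirm the mount
esxcfg-nas -l
```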
Note: Other issues from the Release Notes…
NFS Mounts are Restricted to 8 by Default
The default configuration only allows for 8 NFS mounts per ESX Server host. Workaround: To mount more than eight NFS mounts on an ESX Server host, start the VI Client, select the host from the inventory, and click Advanced Settings on the Configuration tab. In the Advanced Settings dialog box, select NFS and set Net.TcpipHeapSize to 30 and NFS.MaxVolumes to 32. Then, reboot the ESX Server host. These settings enable up to 32 mounts on the ESX Server host.
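If I recall the tooling correctly, the same two advanced settings can be changed from the service console with esxcfg-advcfg; the option paths below are my assumption based on the names shown in the dialog:

```shell
# Raise the TCP/IP heap size and the NFS mount limit (values per the release notes)
esxcfg-advcfg -s 30 /Net/TcpipHeapSize
esxcfg-advcfg -s 32 /NFS/MaxVolumes

# A reboot is still required for the changes to take effect
reboot
```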
Configured NFS Datastore does not Appear in the VI Client after a Reboot of the ESX server host
After a reboot, a datastore that was previously configured in the UI may no longer be visible. When the ESX Server host boots, it attempts to re-mount the existing datastores, but if a mount attempt fails because the server is unavailable, the operation is not retried. Workaround: from the service console, run esxcfg-nas -r, then restart the vmware-hostd agent.
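Spelled out as service console commands, the workaround looks like this (assuming the usual mgmt-vmware init script is what restarts vmware-hostd on ESX 3):

```shell
# Force ESX to retry any NFS mounts that failed at boot
esxcfg-nas -r

# Restart the host agent so the datastore reappears in the VI Client
service mgmt-vmware restart
```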