OpenStack is a worldwide standard for cloud computing and attracts many vendors, including ones that have their own cloud offering, like VMware. VMware developers are looking at how OpenStack can help them integrate VMware vSphere with the cloud.
The announcement that VMware would adopt OpenStack as an alternative cloud controller was made at VMworld 2015 in San Francisco. Developers around the world are now working on making this happen. While some connections are easy to make, other parts of the two platforms are much harder to integrate.
With OpenStack and vSphere integration, interesting issues arise because VMware and OpenStack treat everything from storage to files differently. For example, each system stores virtual machine disk files in its own way.
In OpenStack, storage needs to be as flexible as possible. That is why OpenStack, apart from an image store and a block store, uses object storage as well. In object storage, the hypervisor host connects to a RESTful API to determine where and how files such as virtual disk files are stored, and the storage cluster makes sure each file is stored in a redundant and flexible way. The essential point is that this approach completely severs the direct connection between the file (in the case of a VMware virtual machine, a virtual disk file) and its consumer (the vSphere environment): all access to the storage goes through the RESTful API.
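As a hypothetical sketch of what this RESTful access looks like, the snippet below builds the per-object URL a Swift-style object API uses; the endpoint, account, container, and file names are invented placeholders, not a real deployment.

```python
# Hypothetical sketch of Swift-style object addressing; every name
# below (endpoint, account, container, object) is a made-up example.

def object_url(endpoint, account, container, obj):
    """Build the per-object URL: <endpoint>/v1/<account>/<container>/<object>."""
    return f"{endpoint}/v1/{account}/{container}/{obj}"

url = object_url("https://swift.example.com", "AUTH_demo",
                 "vmdk-store", "vm01-flat.vmdk")
print(url)  # -> https://swift.example.com/v1/AUTH_demo/vmdk-store/vm01-flat.vmdk

# An upload is then a single HTTP PUT of the file body, e.g. with the
# requests library (the token comes from the identity service):
#   requests.put(url, data=open("vm01-flat.vmdk", "rb"),
#                headers={"X-Auth-Token": token})
```

Note that nothing in this exchange tells the client where the bytes physically land; placement and redundancy are entirely the storage cluster's business.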
In OpenStack, two options are commonly used for storage: Swift and Ceph. Storage access in Swift goes through the Swift proxy, which is reached through a RESTful API. Storage access in Ceph is more flexible: apart from API access, there is file system access as well as access through the RADOS Block Device (RBD). For RBD access, a block device needs to be mapped on the node that accesses the file.
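To make the contrast concrete, here is a small Python sketch of the three Ceph access paths; the pool and image names are invented, and the device path shown follows the udev symlink convention commonly used for mapped RBD images.

```python
# The three Ceph access paths, sketched with invented names:
#   1. REST API -- objects are read/written over HTTP.
#   2. File system -- CephFS is mounted like any network file system.
#   3. RBD -- the image shows up as a local block device on the node.

def rbd_device_path(pool, image):
    """udev symlink a client typically sees after mapping an RBD image."""
    return f"/dev/rbd/{pool}/{image}"

# After something like `rbd map volumes/vm01-disk` on the node, the
# hypervisor could use this path like any other local disk:
print(rbd_device_path("volumes", "vm01-disk"))  # -> /dev/rbd/volumes/vm01-disk
```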
Making this approach to file access compatible with file access in VMware is not that easy. In VMware, disk files are VMDK files stored on a VMFS file system, and an abstraction layer for storing files through an object store is simply not available. That means another approach is needed to store VMDK files in the object store.
The starting point for VMDK storage in current versions of vSphere is the VMFS file system. In OpenStack, the hypervisor nodes access images of instances through the RESTful API. To access files on VMFS, the ESXi nodes need to be configured for access to the storage environment, which happens through traditional SAN protocols such as iSCSI or Fibre Channel. These don't allow for storage access through a RESTful API, which is the typical option in OpenStack.
That leaves VMware developers with two choices: either enable storage access through the RESTful API in vSphere, or use the RBD. In the Linux kernel, the RBD driver allows the user to access Ceph-based storage through a regular block device. By using the RBD, the Ceph infrastructure can be abstracted away, and the client node (in this case the vSphere hypervisor) wouldn't have to know anything about the underlying architecture.
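The abstraction the RBD provides can be illustrated with a short sketch: an RBD image is striped over fixed-size RADOS objects (4 MiB by default), and the kernel driver translates every block offset into one of those objects. The image id below is an invented placeholder, and the naming scheme shown is the one used by modern (format 2) RBD images.

```python
# Sketch of the mapping the kernel RBD driver performs: a byte offset
# in the block device resolves to one fixed-size RADOS object.
# The image id is an invented placeholder.

OBJECT_SIZE = 4 * 1024 * 1024  # default RBD object size: 4 MiB

def backing_object(image_id, byte_offset):
    """Name of the RADOS object holding the given byte of the image."""
    index = byte_offset // OBJECT_SIZE
    return f"rbd_data.{image_id}.{index:016x}"

# The client only sees a device like /dev/rbd0; the driver performs
# lookups like this one behind the scenes:
print(backing_object("86804c2ae8944a", 10 * 1024 * 1024))
# -> rbd_data.86804c2ae8944a.0000000000000002 (third 4 MiB object)
```

This is exactly why the client node needs no knowledge of the cluster layout: the striping and object placement stay entirely on the Ceph side of the block device.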
OpenStack and vSphere integration does matter for VMware: it will lead to an environment where it is easier to build large hybrid clouds that allow instances to be migrated from one hypervisor platform to another. With almost all major virtualization vendors working on cloud integration, VMware couldn't afford to stay behind. The exact result is still to be determined, but one thing is certain: it will change the way administrators use VMware environments in the cloud.