
How will vSphere admins handle container virtualization?

The latest wave of virtualization is poised to sweep through data centers, but questions swirl around the ownership and management of these applications.

I think most enterprise IT people looking toward the future know container virtualization is coming. What does this mean to the vSphere administrators operating a data center? Is this the end of the reign of data center virtualization, or just business as usual? Containers are a tool for running applications, and those containers may well end up inside VMs. Many data centers will need to adapt their operations to support applications delivered in containers. This is an area of rapid change, and many questions remain unanswered.

A question of visibility

While container virtualization isn't new or limited to Linux, the current wave of developer-friendly containers is all based on Linux. This means the first thing a vSphere administrator will see is a Linux VM requested by the developers on a new project. More likely, there will be several Linux VMs, and they will probably be quite large. Hopefully, container virtualization will start with VMs in a nonproduction environment, and the developers will let you know what is going on inside those VMs.

Inside the Linux VMs will be a container runtime and a series of container instances. A container instance is the piece of software that makes up part of an application, and multiple applications could have container instances on each VM. This application load might be distributed among container hosts by tools such as Google's Kubernetes.

The vSphere administrator will probably not have any visibility of the containers inside the VMs, and no way to know which applications have instances on each VM. The container instances are also expected to be relatively short-lived and disposable, so a container host VM could run different applications at different times. Today, a host could run parts of the customer relationship management and payroll applications; tomorrow, the public website and enterprise resource planning applications.
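If administrators do get access to a container host, the Docker daemon itself can show what the VM-level view hides. The following is a minimal sketch, assuming the Docker SDK for Python (pip install docker) running on, or pointed at, one container host VM; the "owner" label is a hypothetical convention a team might adopt to tie instances back to applications.

```python
import docker

# Connect to the Docker daemon on this container host VM.
client = docker.from_env()

# Enumerate the container instances the VM is hosting right now --
# detail that vSphere tools alone cannot see.
for container in client.containers.list():
    labels = container.labels or {}
    print(
        container.short_id,
        container.image.tags,                 # which application image is running
        labels.get("owner", "<unlabelled>"),  # hypothetical ownership label
    )
```

Because instances are disposable, a listing like this is only a snapshot; running it an hour later on the same VM could show an entirely different set of applications.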

Another possibility is that the project installs Linux on bare-metal physical servers to run containers. There is a strong argument against using VMs to run containers: containers virtualize the operating system, so there is no need to virtualize the hardware as well. This is probably a good approach in "cloud scale" deployments where applications have thousands of container instances. In more enterprise-scale deployments, there will be fewer instances, so the flexibility of combining containers and VMs is more valuable.

The key characteristic of container virtualization is the ability to scale out workloads. A large number of containers often work together to make up one application. Application load is spread across many container hosts, and they may all get busy at the same time. Some applications will also autoscale, increasing the number of instances in response to load. Containers are fast to start and designed to be disposable, so scaling out and back in can be very rapid. The net result can be a much more uneven workload, with load surges and lulls. In a loaded vSphere environment, this surging can cause temporary resource contention. Worse, container orchestration tools may not be VM-aware: they may try to create further container instances when the underlying physical resources are already saturated.
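To make that gap concrete, here is a hedged sketch of what a VM-aware scaling guard could look like: it refuses to add instances when the physical hosts are already busy. It assumes the official Kubernetes Python client (pip install kubernetes); the vsphere_cluster_cpu_percent helper is hypothetical and stands in for a vCenter query (for example, via pyVmomi). The guard logic is the point, not the exact API.

```python
from kubernetes import client, config

CPU_CEILING = 80  # assumed saturation threshold, in percent


def vsphere_cluster_cpu_percent():
    """Hypothetical helper: query vCenter (e.g. via pyVmomi) for the CPU
    utilization of the physical hosts running the container host VMs."""
    raise NotImplementedError


def scale_if_headroom(namespace, name, replicas):
    # Refuse to add container instances when the hardware is saturated,
    # something a purely container-level autoscaler would not check.
    if vsphere_cluster_cpu_percent() >= CPU_CEILING:
        print("Physical hosts saturated; not scaling out")
        return
    config.load_kube_config()
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name, namespace, {"spec": {"replicas": replicas}}
    )
```

Nothing like this ships out of the box with the orchestrators; the sketch simply illustrates the kind of coordination between the container layer and the vSphere layer that the article argues is missing.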

Who owns the container?

One question is whether vSphere administrators should care about container-based workloads. After all, a container is just another way to deploy an application. Are vSphere administrators responsible for the applications inside their existing VMs? If not, then they shouldn't be responsible for containers either. VMware's vision is very different: its expectation is that vSphere administrators will need visibility into, and manageability of, individual container instances.

One approach VMware is working on is to make VMs as fast and easy to create as container instances. This has been called Project Fargo but is now referred to as Instant Clone. The feature allows a new VM to be created from a running VM in under a second; the two VMs use copy-on-write techniques for disk and RAM to minimize overhead. Normal VM management tools understand little about containers, so the aim is to run a single container instance in each of these lightweight VMs, allowing any VM management tool to understand containers. VMware's Project Bonneville even allows this process to be driven through the Docker API. With Bonneville, developers can deploy applications onto ESXi servers using Docker commands; the same configuration files and commands they use in development can be used to deploy to production. The vSphere administrators then see each container instance as a VM.
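Since Bonneville's promise is a Docker-API-compatible endpoint in front of ESXi, an ordinary Docker client should need nothing more than a different endpoint address. This is a minimal sketch, assuming the Docker SDK for Python; the endpoint URL and image name are hypothetical placeholders, not documented Bonneville details.

```python
import docker

# Point a standard Docker client at the (hypothetical) Bonneville endpoint
# instead of a local daemon; the developer workflow is otherwise unchanged.
prod = docker.DockerClient(base_url="tcp://bonneville.example.com:2376")

# The same image and run options used in development...
prod.containers.run(
    "registry.example.com/payroll-web:1.4",  # hypothetical application image
    detach=True,
    ports={"8080/tcp": 8080},
)
# ...but each instance lands on ESXi as an Instant Clone VM that
# vSphere tools can see and manage individually.
```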

It remains to be seen whether container virtualization, and running containers inside VMs in particular, will become commonplace. There are many parts to VMware's plans for "modern applications" in the data center. Administrators in charge of vSphere environments should pay attention to the development of technology that enables better visibility of containers running on their platforms.

Next Steps

Container virtualization is increasing efficiency

How VMware containers work

Docker bringing greater scalability to container virtualization
