
Container technology continues to gain ground with VMware

Containers have been one of the hottest topics of 2015. There are some drawbacks, but are they enough to slow the technology's momentum?

Containers have been a hot topic in the past year, and it's hard to avoid the sense that this is the wave of the future for multi-tenant virtual systems. The main attraction of containers comes from a limitation of traditional virtualization. When a virtual instance is created traditionally, a copy of the OS and app stack has to be included with each instance, taking up extra memory, as in the case of virtual desktops. Loading that memory into the server also costs network time.

A better option would be to have just one copy of the desktop OS and one copy of the common apps. The same is true of the LAMP stack found on so many public cloud servers, where the only variation is in the small files sitting on top of the stack that define the webpage, among other things. Containers address this problem head on with a single instance of Linux or Windows on the server. The app code is also single-imaged, and each container holds only the tenant-specific apps and data. The memory savings allow each server to host more instances, so the overall price of computing drops substantially, perhaps by as much as 50%.
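As a rough illustration of that arithmetic, the sketch below compares tenant density on one server under both models. All the figures are hypothetical round numbers chosen for illustration, not measurements from any benchmark:

```python
# Back-of-envelope tenant density per server (hypothetical figures).
OS_MB = 1024           # memory for one OS image
STACK_MB = 512         # memory for the shared app stack (e.g., LAMP)
TENANT_MB = 256        # tenant-specific apps and data
SERVER_MB = 32 * 1024  # total server memory (32 GB)

# Traditional VM: every instance carries its own OS and stack copy.
vm_instances = SERVER_MB // (OS_MB + STACK_MB + TENANT_MB)

# Containers: one shared OS and stack, then only tenant data per instance.
container_instances = (SERVER_MB - OS_MB - STACK_MB) // TENANT_MB

print(vm_instances, container_instances)  # → 18 122
```

With these particular numbers the container model fits several times as many tenants on the same hardware; the real ratio depends on how large the tenant-specific slice is relative to the shared OS and stack.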

In the VMware universe, containers pose a threat to the well-tried hypervisor-based virtualization method. Docker technology is now included in Red Hat's RHEL 7 releases, effectively mainstreaming containers in the Linux world. This will polarize the market, with companies that are heavily invested in VMware likely to stay with a vendor they understand and trust.

For new installations, and especially those with high open source content such as OpenStack-based clouds, Docker and other container offerings are likely to be very attractive. This is underlined by the level of support Google and other major cloud service providers are giving to containers, with proprietary orchestration models that have no ties back to VMware.

As in other areas, VMware has followed the rule of keeping enemies closer and has created a model of running containers on top of hypervisors. In VMware's opinion, this combines the efficiency and performance of containers with the VMware ecosystem, but layering the two raises performance questions compared with directly mounted containers.

We should have a better handle on those performance questions by the end of 2015. Meanwhile, VMware is pushing to support Docker in vSphere, Fusion and vCloud Air. Kubernetes, Google's container management system, has also been ported to vSphere, while Pivotal Cloud Foundry serves as a vehicle for automated deployment of containers.

With containers saving space, reducing network traffic and being easier to maintain, how the market divides will ultimately be decided by cost and ease of deployment versus customer loyalty. VMware isn't cheap when fully deployed, but retraining comes with a price, and there are risks with any new core software. If a bare-bones platform approach is compelling enough to cause VMware customers to transition to pure containers, VMware will suffer. Otherwise, its piggyback solution will hold on to market share.

Container drawbacks

Are there negatives to containers? Well, first, there is some loss of flexibility. Since containers run on top of a single operating system (OS), any given server has to be either all Linux or all Windows. This isn't a major impediment in typical clouds, since a server can be converted between Linux and Windows with a reboot, allowing a response to macroscopic workload changes.
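That "reboot to flip the OS" response can be sketched as a tiny capacity calculation. This is a hypothetical model (the function and all figures are invented for illustration), showing how a fleet operator might decide how many servers to boot into each OS as demand shifts:

```python
# Hypothetical sketch: with one OS per server, a container cloud meets a
# demand shift by rebooting whole servers into the other OS, rather than
# mixing Linux and Windows guests on one host as a hypervisor could.
def rebalance(servers, linux_demand, windows_demand, per_server):
    """Return (linux_servers, windows_servers) needed for current demand."""
    need_linux = -(-linux_demand // per_server)      # ceiling division
    need_windows = -(-windows_demand // per_server)
    if need_linux + need_windows > servers:
        raise RuntimeError("fleet too small for current demand")
    return need_linux, need_windows

# 500 Linux and 180 Windows containers, 100 containers per server:
print(rebalance(10, 500, 180, 100))  # → (5, 2)
```

The coarser granularity (a whole server per OS family) is exactly the flexibility cost the paragraph above describes, and why it matters mainly at the macroscopic level.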

As of today, the list of OSes that support containers is still relatively short, but this will evolve over the next year. Because there is only one operating system on the server, standard drivers can be used, opening up configuration options considerably and allowing certification of the stack and the system to proceed rapidly.

A bigger concern with containers is security. Multi-tenancy built around hypervisors, with hardware support, is well understood. A single OS has to rely on software to separate users, at least in the short term, and it will be a while before the security of container multi-tenancy reaches the levels of traditional virtualization. The sharing of software in containers may also make them more crash-prone. Hundreds of apps running in parallel may interact with each other, and one app crashing may bring all the containers on a server to a halt. With instance storage becoming common in virtual machines, the container system will have to cope with proper erasure and tenant lockout on what amounts to a drive local to the OS underlying the containers.

These are all problems that can be overcome in time. The VMware approach in fact allows well-tried hypervisor security to encapsulate the containers, though that's only valuable if there are multiple containers per instance. In the meantime, containers offer tremendous advantages for use cases with no critical data to protect, or with many identical instances, such as LAMP stacks. Proper orchestration can control the instance storage teardown problem, which opens up things even more.
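To make the teardown idea concrete, here is a simplified sketch of what orchestrated "proper erasure" of a tenant's local instance storage might look like. The `secure_teardown` helper is hypothetical, and a real implementation would need much stronger erasure guarantees (and durability handling) than a single zero-fill pass provides:

```python
import os
import tempfile

def secure_teardown(path):
    """Zero out and delete a tenant's local files before storage is reused.
    Hypothetical helper for illustration only: one zero-fill pass is not a
    forensically sound erasure method."""
    for root, _dirs, files in os.walk(path, topdown=False):
        for name in files:
            fp = os.path.join(root, name)
            size = os.path.getsize(fp)
            with open(fp, "r+b") as f:
                f.write(b"\x00" * size)  # overwrite contents with zeros
            os.remove(fp)
        os.rmdir(root)  # bottom-up walk lets us remove emptied directories

# Demo on a temporary "instance store" standing in for local drive space.
store = tempfile.mkdtemp()
with open(os.path.join(store, "tenant.db"), "w") as f:
    f.write("tenant data")
secure_teardown(store)
print(os.path.exists(store))  # → False
```

In practice this logic would live in the orchestrator's instance-lifecycle hooks, so the scrub runs before the underlying storage is handed to the next tenant.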

Orchestration is also addressing the networking needs of containers. Docker's libchan and libswarm projects, for example, aim to allow network abstraction in a software-defined network environment. In fact, there is a push to run OpenStack data services in containers.

Containers will co-exist with traditional hypervisor virtualization for a good while, as the industry begins to understand the tuning and security implications, and companies adjust their hardware and software to best take advantage of the new approach. Longer term, it seems likely that the only impediment to containers completely replacing hypervisors is the limitation to a common operating system, which is a minor operational issue for clouds to overcome.

VMware users may find value in the piggyback approach while staying within the VMware ecosystem, which will evolve to handle containers better. Even here, though, the long-term question is whether that ecosystem will morph to drive containers directly, without hypervisor virtualization. That might be the best of all worlds for current VMware users.

The benefits of containers are becoming clear, and 2015 is the year to begin sandboxing and understanding where they fit in any given situation, with the aim of running containers in production by early 2016, if not sooner.

Next Steps

Tools and techniques to scale container technology

Container technology competition is heating up

How do containers fit in the cloud?

The cost of running AWS instances vs. Docker containers

This was last published in September 2015


Join the conversation



How do you think VMware's container strategy stacks up?
Containers are yielding 3-5x higher workload density (more efficient than VMs) and are being adopted particularly by developers and operations for agile cloud-native implementations. VMs will continue to host older and legacy workloads. Security concerns will diminish, just as they did when VMs were first introduced. Perhaps the most interesting aspect is that Docker is the first cross-platform API supported by Microsoft, AWS, Google and Red Hat, opening the door to combined Windows + Linux clouds built on Docker-compatible tooling. This is the exciting aspect, and it could be a game changer in the coming years.