Building a dynamic infrastructure around VMware involves a number of complexities and components. One crucial element that is often overlooked is the human side of the process: problems can arise when software is developed and distributed across separate teams, so caution should be exercised. DevOps methodology allows companies to ship updates more quickly and bring features to market faster than in a standard environment. So what exactly is DevOps, and how does it apply to a classic VMware environment?
Finding balance between admins and developers
For starters, DevOps means different things to different people; the best way to think of it is as automation and infrastructure as code. Historically, VMware administrators have been kept separate from developers: admins would build the servers, and developers would upload code to them. Although the two groups occasionally met in the middle, on the whole they worked independently of each other. This made it difficult to determine who was responsible for fixing issues and keeping servers running smoothly. Fortunately, with the advent of the cloud, that whole arrangement began to change.
With elastic scale and automated deployment, maintaining standardization across the environment is critical. The last thing you want in an elastic application is random changes to code or configuration files. Remember, it is generally easier to spin up a new instance than to try to repair an existing one. Managing complexity is key for administrators.
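The "spin up a new instance rather than repair an existing one" principle can be sketched as a simple reconciliation check. This is a minimal illustration, not a VMware API; the instance names and configuration strings are hypothetical:

```python
import hashlib

# Hypothetical desired configuration for every instance in the pool.
DESIRED_CONFIG = "app_version=2.1\nlog_level=info\n"
DESIRED_HASH = hashlib.sha256(DESIRED_CONFIG.encode()).hexdigest()

def classify(instances):
    """Split instances into healthy and replace lists by config hash.

    Drifted instances are marked for replacement with fresh copies
    rather than patched in place -- the immutable approach that keeps
    an elastic application standardized.
    """
    healthy, replace = [], []
    for name, config in instances.items():
        actual = hashlib.sha256(config.encode()).hexdigest()
        (healthy if actual == DESIRED_HASH else replace).append(name)
    return healthy, replace
```

Here a random change to one instance's configuration simply queues that instance for replacement; nothing tries to diagnose or repair it in place.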
Blueprint designers are a sort of rudimentary step toward DevOps, as they create the services that allow developers and users to check out and use copies of a VM. If necessary, these copies are fed back to the designer to implement any desired changes. However, there is a certain disconnect between this process and genuine DevOps.
The DevOps approach to coding
DevOps methodology is similar to a multidisciplinary team whose members work closely together without silos or separation to get the code developed, tested and deployed.
On a code management level, there are DevOps-style applications that help maintain a desired state on servers. These include some well-known products, like Chef, Puppet and Vagrant, as well as many similar systems. Their purpose is to rein in common management, code development and versioning issues -- as well as file creep -- using the concept of a "desired state." They can also set up and configure that state while the application is being built. Templates are no longer the be-all and end-all, but rather a starting point on which to build.
Chef, specifically, lets admins keep track of file versions as VMs report in, making it possible to confirm a successful configuration. If something is off, an admin can take appropriate action.
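The check described above can be illustrated with a small drift-detection sketch. This is not Chef's actual API -- Chef expresses desired state as resources in recipes -- it only shows the comparison an admin relies on, with hypothetical VM and file names:

```python
def find_drift(reported, desired):
    """Compare file versions reported by each VM against the desired state.

    reported: {vm_name: {file_name: version}} as VMs check in.
    desired:  {file_name: version} the admin expects everywhere.
    Returns {vm_name: [files that differ or are missing]} so an admin
    can take appropriate action.
    """
    drift = {}
    for vm, files in reported.items():
        bad = [f for f, want in desired.items() if files.get(f) != want]
        if bad:
            drift[vm] = bad
    return drift
```

A VM whose reported versions all match the desired state simply does not appear in the result, so an empty dict means a successful configuration across the board.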
From a testing and development perspective, there are several other tools, such as Docker and Vagrant, that can be used to ensure everyone is working on the same code. If an environment works in Docker, it will also work on any Docker host, be it Azure, Photon or any of several other available hosts. Vagrant can pull the latest image and work on it; once changes have been made, the updated Docker image can be installed on hosts going forward.
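The pull-the-latest-image-and-roll-forward workflow boils down to two standard Docker CLI calls, `docker pull` and `docker run`. The sketch below only builds the command lists (the image name is a placeholder); actually executing them, e.g. with `subprocess.run`, requires a Docker host:

```python
def rollout_commands(image, tag="latest"):
    """Build the Docker CLI calls for rolling the newest image forward.

    `docker pull` and `docker run` are standard Docker commands; the
    image name passed in is a hypothetical example.
    """
    ref = f"{image}:{tag}"
    return [
        ["docker", "pull", ref],               # fetch the latest image
        ["docker", "run", "-d", "--rm", ref],  # start a fresh container from it
    ]
```

Because the same image reference is used on every host, the environment that worked in Docker during development is the one that runs in production.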
Obviously, quality assurance is critical for any code shop. DevOps methodology pushes the issue forward rather than hiding it in the code base. Small, frequent changes are the order of the day, and the inability to deploy code is treated as a severity one incident.
Through the use of elastic scale and with a little knowledge, a good DevOps admin can redeploy code to troubleshoot smaller, non-code issues.
Choosing the right setup in VMware environments
There are two different scenarios in a VMware environment. The first is the classic vRealize setup with elastic design and self-service, which suits smaller shops. The second is Docker, a better option for software-as-a-service shops with a limited number of offerings.
DevOps methodology does have limitations. For one, it doesn't fit well into legacy environments. This is usually remedied by starting fresh -- however, it's easier said than done in long-deployed, stable environments with huge amounts of legacy hardware and software.
Bimodal and trimodal IT, a major topic of discussion at the 2015 national VMUG, help resolve this issue by running elastic environments separately rather than on a legacy platform. Like perhaps any other sane VMware admin, I would not recommend running a mixed VMware environment. The cloud needs its own hardware, and not making the proper arrangements can create trouble down the road.