Note: This is part two of a three-part series about work we performed at CWDS, and more specifically our experience with automation using Ansible and Jenkins.
With the SOMS M&O win, complete with a signed contract, we can finally take a break and relax for a moment…
Okay, moment’s up!
Now we’re starting the real work. We will be sifting through documentation and attending knowledge transfer sessions explaining what the outgoing M&O vendor has done so we can recreate it in a new, modernized environment. This includes requesting new VMs from the other contractor. We will architect the new solution, assess everything within the old workloads/VMs (i.e., above the OS), and then figure out how to set up and configure the new environments. The challenge, and the opportunity, is that many of the IT tools used on SOMS are past end of life (EOL): Red River gets to define the ‘to-be’ modernized world. Thankfully, on the CWDS project we deployed technologies that eliminated this exact problem, and the DevOps team was able to use them effectively.
Based on my previous post, you probably think I’m talking about Docker containers, and you would be partly right. As in the traditional IT (physical hardware) world, most workloads require more than one component. In our architecture, each container holds one application component, complete with whatever libraries and supporting tools it needs. However, there’s more to our solution, and a reason containers can only be part of it: Docker itself is not part of the operating system (OS). Therefore, before you can run a Docker container, someone needs to install Docker.
Note: Just a quick reminder, modern applications are built in a modular, component-based fashion, often consisting of many smaller (software) parts.
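To make the one-component-per-container idea concrete, here is a minimal sketch of what such an image definition might look like. The base image, artifact name, and port below are hypothetical placeholders, not taken from the actual CWDS builds:

```dockerfile
# Hypothetical single-component image: one application, its runtime,
# its supporting libraries, and nothing else.
FROM openjdk:8-jre-alpine

# Copy just this component's build artifact into the image
COPY target/case-api.jar /app/case-api.jar

# Placeholder port and entry point for the component
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/case-api.jar"]
```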
To deliver our modernized solution, we used two popular pieces of automation technology: Ansible and Jenkins. First, let me answer, “What is Ansible?” Ansible is a configuration management (CM) tool that lets you write a “playbook,” a cross between a script and an installation specification. When a playbook runs, Ansible goes through the steps one by one (like a script) and checks whether an appropriate version of the specified element has been installed and configured as defined in the step (like an installation specification). If it has, Ansible moves to the next step. If not, Ansible performs whatever installation and configuration is needed. Similarly, “What is Jenkins?” Jenkins is a continuous integration (CI) tool; it orchestrates processes, like running Ansible playbooks, in response to events such as code being checked into the master branch or someone manually clicking a button. So, in summary, to deliver our robust Docker container environment, we used Ansible to create a series of playbooks and Jenkins to coordinate the automation and timing.
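As a rough sketch of the idea (the inventory group and package names here are hypothetical, not our actual playbooks), a playbook that makes sure Docker is present and running might look like this:

```yaml
# Hypothetical playbook: ensure Docker is installed and running.
# Each task is idempotent: if the desired state already exists,
# Ansible skips ahead to the next step.
- name: Provision Docker hosts
  hosts: docker_hosts          # placeholder inventory group
  become: yes
  tasks:
    - name: Install the Docker engine
      yum:
        name: docker
        state: present         # install only if it is missing

    - name: Ensure the Docker service is started and enabled
      service:
        name: docker
        state: started
        enabled: yes
```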
Note: For those of you who like more information: modern source code repositories usually support the creation and maintenance of multiple “branches,” or versions of the source code, that can be used for different purposes and merged together. The “master branch” is the copy that should contain only code that works properly.
Making this all work together, from developer to operations (build/run), requires a combination of these technologies. For us, they allowed the DevOps team to wire up our CI tool so that, once a developer believed the code he or she was working on was ready and merged it into the master branch, the tool would build Docker images and test them.
If all tests passed, the images would be pushed to DockerHub and the tool would perform the deployment to the target environment. Since the target environment could be brand new, the deployment would run CM scripts to make sure the VMs were set up correctly, pull and start the containers from DockerHub, and notify staff that the CI process had completed. A DevOps engineer, developer, or even a manager could run the deployment process manually. Simply put, this means that shortly after a developer believes a feature is complete, it is available for final approval and ready to be pushed to test, production, or any other environment. It also means that the environmental needs of the application were documented up front in the CM and CI scripts as part of developing the automation, and the scripts themselves are version controlled.
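To make that pipeline concrete, here is a minimal sketch of a Jenkins declarative pipeline along those lines. The image name, test script, and playbook path are hypothetical placeholders, and it assumes the build agent is already logged in to DockerHub; it is not our actual configuration:

```groovy
// Hypothetical pipeline: on a merge to master, build the image,
// test it, push it to DockerHub, then deploy with Ansible.
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                sh 'docker build -t example/case-api:${BUILD_NUMBER} .'
            }
        }
        stage('Test') {
            steps {
                // Run the component's test suite inside the image
                sh 'docker run --rm example/case-api:${BUILD_NUMBER} ./run-tests.sh'
            }
        }
        stage('Push to DockerHub') {
            steps {
                // Assumes the agent has already run 'docker login'
                sh 'docker push example/case-api:${BUILD_NUMBER}'
            }
        }
        stage('Deploy') {
            steps {
                // The playbook configures even a brand-new VM, then
                // pulls and starts the container at the given tag
                sh 'ansible-playbook -i inventory deploy.yml -e "tag=${BUILD_NUMBER}"'
            }
        }
    }
}
```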
What we’ve developed at CWDS can be reused on the new SOMS M&O project. We can and will use CM and CI technologies to define and build environments, as well as to perform deployments of the custom applications, helping to modernize SOMS IT services.
Next month, I’ll talk about some aspects of extreme programming (XP).
Published: 08/07/2019