Using ADOP and Docker to Learn Ansible

As I have written here, the DevOps Platform (aka ADOP) is an integration of open source tools that is designed to provide the tooling capability required for Continuous Delivery.  Through the concept of cartridges (plugins) ADOP also makes it very easy to re-use automation.

In this blog I will describe an ADOP Cartridge that I created as an easy way to experiment with Ansible.  Of course, there are many other ways of experimenting with Ansible, such as using Vagrant.  I chose to create an ADOP cartridge because ADOP is so easy to provision and so predictable.  If you have an ADOP instance running, you will be able to experience Ansible doing various interesting things in under 15 minutes.

To try this for yourself:

  1. Spin up an ADOP instance
  2. Load the Ansible 101 Cartridge (instructions)
  3. Run the jobs one-by-one and in each case read the console output.
  4. Re-run the jobs with different input parameters.

For anyone only loosely familiar with ADOP, Docker and Ansible, I recognise that this blog could be hard to follow, so here is a quick diagram of what is going on.

[Diagram: docker-ansible]

The Jenkins Jobs in the Cartridge

The jobs do the following things:

The first job, as its name suggests, simply demonstrates how to install Ansible on CentOS.  It installs Ansible in a Docker container in order to keep things simple and easy to clean up.  Having built a Docker image with Ansible installed, it tests the image by running the following inside the container:

$ ansible --version
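If you just want to see the essence of what that job does, here is a rough hand-run equivalent; the base image and the EPEL-based install are my own assumptions rather than necessarily what the cartridge itself uses:

$ docker run --rm centos:7 \
      bash -c "yum install -y epel-release && yum install -y ansible && ansible --version"

The actual job bakes the install into an image (via docker build) so that it can be reused and thrown away cleanly.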

2_Run_Example_Adhoc_Commands

This job is a lot more interesting than the previous one.  As the name suggests, it is designed to run some ad hoc Ansible commands (which is one of the first things you’ll do when learning Ansible).

Since the purpose of Ansible is infrastructure automation, we first need to set up an environment to run commands against.  My idea was to set up an environment of Docker containers pretending to be servers.  In real life I don’t think we would ever want Ansible configuring running Docker containers (we normally want Docker containers to be immutable and certainly don’t want them to have ssh access enabled).  However, I felt it was a quick way to get started and create something repeatable and disposable.

The environment created resembles the diagram above.  As you can see, we create two Docker containers (acting as servers) calling themselves web-node-1 and web-node-2, and one calling itself db-node.  The images already contain a public key (the same one Vagrant uses, actually) so that they can be ssh’d to (once again, not good practice with Docker containers, but needed so that we can treat them like servers and use Ansible).  We then use an image which we refer to as the Ansible Control Container.  We create this image by installing Ansible and adding an Ansible hosts (inventory) file that tells Ansible how to connect to the db and web “nodes” using the same key mentioned above.
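The hosts file itself isn’t reproduced in this post, but a minimal sketch of the kind of inventory the Ansible Control Container needs would look something like this (the group names come from the commands below; the user and key path are my assumptions):

$ cat > hosts <<'EOF'
[web]
web-node-1
web-node-2

[db]
db-node

[all:vars]
ansible_user=root
ansible_ssh_private_key_file=/root/.ssh/insecure_private_key
EOF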

With the environment in place the job runs the following ad hoc Ansible commands:

  1. ping all web nodes using the Ansible ping module: ansible web -m ping
  2. gather facts about the db node using the Ansible setup module: ansible db -m setup
  3. add a user to all web servers using the Ansible user module: ansible web -b -m user -a 'name=johnd comment="John Doe" uid=1040'

By running the job and reading the console output you can see Ansible in action and then update the job to learn more.
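If you do update the job, a few other well-known modules are worth trying; these are illustrative examples rather than anything baked into the cartridge:

ansible all -m setup -a "filter=ansible_distribution*"
ansible web -b -m yum -a "name=httpd state=present"
ansible all -m copy -a "src=/etc/hosts dest=/tmp/hosts"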

3_Run_Your_Adhoc_Command

This job is identical to the job above in terms of setting up an environment to run Ansible.  However instead of having the hard-coded ad hoc Ansible commands listed above, it allows you to enter your own commands when running the job.  By default it pings all nodes:

ansible all -m ping
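For example, you could replace the default with something like this (just an illustration using the standard command module):

ansible db -m command -a "uptime"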

4_Run_A_Playbook

This job is identical to the job above in terms of setting up an environment to run Ansible.  However, instead of passing in an ad hoc Ansible command, it lets you pass in an Ansible playbook to run against the nodes.  By default, the playbook that gets run installs Apache on the web nodes and PostgreSQL on the db node.  Of course, you can change this to run any playbook you like, so long as it is set to run on a host expression that matches web-node-1, web-node-2, and/or db-node (or "all").
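The playbook shipped with the cartridge is the place to look for the detail, but a minimal sketch of that kind of playbook, assuming yum-based nodes and the web/db groups from the inventory above, would be something like:

$ cat > example-playbook.yml <<'EOF'
---
- hosts: web
  become: yes
  tasks:
    - name: Install Apache
      yum:
        name: httpd
        state: present

- hosts: db
  become: yes
  tasks:
    - name: Install PostgreSQL
      yum:
        name: postgresql-server
        state: present
EOF
$ ansible-playbook -i hosts example-playbook.yml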

How the jobs 2-4 work

To understand exactly how jobs 2-4 work, read the code: it is reasonably well commented and should be fairly readable.  At a high level, however, the following steps are run (sketched in shell after the list):

  1. Create the Ansible inventory (hosts) file that our Ansible Control Container will need so that it can connect (ssh) to our db and web “nodes” to control them.
  2. Build the Docker image for our Ansible Control Container (install Ansible like the first Jenkins job, and then add the inventory file)
  3. Create a Docker network for our pretend server containers and our Ansible Control container to all run on.
  4. Create a docker-compose file for our pretend servers environment
  5. Use docker-compose to create our pretend servers environment
  6. Run the Ansible Control Container, mounting in the Jenkins workspace if we want to run a local playbook file, or otherwise just running the ad hoc Ansible command.
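Stripped of the Jenkins wrapping, steps 3 to 6 boil down to something like the following; the network name, compose file name and control image name here are purely illustrative, not necessarily what the cartridge uses:

$ docker network create ansible-net                      # step 3: network shared by the "nodes" and the control container
$ docker-compose -f docker-compose.yml up -d             # steps 4-5: start the pretend web/db "nodes"
$ docker run --rm --net ansible-net \
      -v "$WORKSPACE":/playbooks \
      ansible-control \
      ansible all -m ping                                # step 6: run the ad hoc command (or ansible-playbook) from the control container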

Conclusion

I hope this has been a useful read and has clarified a few things about Ansible, ADOP and Docker.  If you find this useful, please star the GitHub repo and/or share a pull request!

Bonus: here is an ADOP Platform Extension for Ansible Tower.


Running the DevOps Platform on Microsoft Azure

As per my last post about GCE, sometimes knowing something is possible just isn’t good enough.  So here is how I spun up the DevOps Platform on the Microsoft Azure cloud.  Warning: thanks to Docker Machine, this post is very similar to that earlier one.

1. I needed an Azure account.

2. I logged into my Azure account and didn’t click “view the new Portal”.

3. On the left-hand menu, I scrolled down to the bottom (it didn’t immediately look to me like it would scroll, so hover) and clicked Settings.  Here I was able to see my subscription ID and copy it.

4. (Having previously installed Docker Toolbox, see here) I opened Git Bash (as an Administrator) and ran this command:

$ docker-machine create --driver azure --azure-size Standard_A3 --azure-subscription-id <the ID I just copied> markos01

I was prompted to open a URL in my browser, enter a confirmation code, and then log in with my Azure credentials.  Credit to Microsoft, this was easier than GCE, for which I needed to install the gcloud command-line utility!

You will notice that this is fairly standard.  I picked a Standard_A3 machine type, which is roughly equivalent to what we use for AWS and GCP.

5. I waited while a machine containing Docker was created in Azure.

6. I cloned the ADOP Docker Compose repository from GitHub:

$ git clone https://github.com/Accenture/adop-docker-compose
$ cd adop-docker-compose

7. I ran the normal startup.sh command as follows:

$ ./startup.sh -m markos01 -c NA

I entered a user name (thanks to this recent enhancement) and, hey presto:

...
SUCCESS, your new ADOP instance is ready!
Run these commands in your shell:
eval \"$(docker-machine env $MACHINE_NAME)\"
source env.config.sh
Navigate to http://52.160.97.159 in your browser to use your new DevOps Platform!

And just to prove it:

$ whois 52.160.97.159 | grep Org
Organization: Microsoft Corporation (MSFT)
OrgName: Microsoft Corporation
OrgId: MSFT

8. I had to go to All resources > markos01-firewall > Inbound security rules and add a rule to allow HTTP to my server on port 80.
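As an aside, I believe more recent versions of the Docker Machine Azure driver can open extra ports at creation time; if your version supports the --azure-open-port flag, something like this would avoid the manual firewall step (I haven’t tested it):

$ docker-machine create --driver azure --azure-size Standard_A3 --azure-subscription-id <the ID I just copied> --azure-open-port 80 markos01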

9. I viewed my new Azure-hosted ADOP instance in (of course…) Chrome! 😉

More lovely stuff!

 

Running the DevOps Platform on Google Compute Engine

Sometimes knowing something is possible just isn’t good enough.  So here is how I spun up the DevOps Platform on Google Compute Engine (GCE).

1. I needed a Google Compute Engine account.

2. I enabled the Google Compute APIs for my GCE account

3. I installed the Google Cloud command-line SDK (gcloud).

4. I opened the Google Cloud SDK Shell link that had appeared in my Windows Start menu and ran:

C:\> gcloud auth login

This popped open a Chrome window and asked me to authenticate against my GCE account.

5. (Having previously installed Docker Toolbox, see here) I opened Git Bash (as an Administrator) and ran this command:

$ docker-machine create --driver google \
                 --google-project <a project in my GCE account> \
                 --google-machine-type n1-standard-2 \
                 markosadop01

You will notice that this is fairly standard.  I picked an n1-standard-2 machine type, which is roughly equivalent to what we use for AWS.

6. I waited while a machine containing Docker was created in Google.

7. I cloned the ADOP Docker Compose repository from GitHub:

$ git clone https://github.com/Accenture/adop-docker-compose
$ cd adop-docker-compose

8. I ran the normal startup.sh command as follows:

$ ./startup.sh -m markosadop01 -c NA

And hey presto:

...
SUCCESS, your new ADOP instance is ready!
Run these commands in your shell:
eval "$(docker-machine env $MACHINE_NAME)"
source env.config.sh
Navigate to http://104.197.235.64 in your browser to use your new DevOps Platform!

And just to prove it:

$ whois 104.197.235.64 | grep Org
Registrant Organization: Google Inc.
Admin Organization: Google Inc.
Tech Organization: Google Inc.

9. I had to go to Networks > Firewall rules and add a rule to allow HTTP to my server.
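This can also be done from the Cloud SDK shell with something like the following (the rule name is arbitrary):

C:\> gcloud compute firewall-rules create allow-http --allow tcp:80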

10. I viewed my new Google-hosted ADOP instance in (of course…) Chrome!

Lovely stuff!

Practical Benefits of Continuous Delivery

As I’ve discussed previously, I believe that Continuous Delivery is a practice that no software delivery organization should ignore.

Fundamental to implementing continuous delivery is building a delivery pipeline.  I drew up the following diagram of a delivery pipeline because I’ve found it useful for articulating the practical benefits of Continuous Delivery and highlighting where they are realized in the software delivery lifecycle.

I’m posting it here in case it is of use to others.

[Diagram: Benefits of a delivery pipeline]

Here are the benefits stated in the diagram in text form:

  1. Immediate start after check-in: no wasted time
  2. One new change per new pipeline: transparent debugging
  3. Parallel execution: faster feedback
  4. All stages (e.g. code analysis) used as enforceable gates: easy to enforce quality controls
  5. Can include infrastructure / environment build as well as application deployment: predictable and consistent behaviour
  6. Visible project status: easy to understand current stage of delivery execution
  7. If a stage fails, the committer of the change can be immediately notified: efficient communication
  8. Fully automated: predictable outcomes and minimised manual effort
  9. Consistently executed automated test harness: high visibility of code quality and automated test stability
  10. Easy to drill down to cause of failure: faster debugging
  11. Highly visible historic information: can extract trends which inform planning decisions
  12. Tested build package re-used: predictable and consistent behaviour
  13. Environments are recreated from version control, so there is no need to limit their number: efficient debugging
  14. Infrastructure resources recycled: efficient use of cloud services
  15. Some stages may only be triggered manually: compatible with release management approval processes
  16. The pipeline runs faster, cheaper quality gates before successively slower, more expensive ones: ensures optimised fast feedback

Notes:

The pipeline in the diagram is for a single independent software component.  I will describe how to handle multiple components in a future post.

The imagery is inspired by the build pipeline plugin for the Jenkins tool.

 

Jenkins in the Enterprise

Several months ago I attended a conference about Continuous Delivery. The highlight of the conference was a talk from Kohsuke Kawaguchi, the original creator of Jenkins. I enjoyed it so much that I decided to recapture it in this blog.

To the uninitiated, Jenkins is a Continuous Integration (CI) engine. Basically, it is a job scheduling tool designed to orchestrate automation related to all aspects of software delivery (e.g. compilation, deployment, testing).

To make this blog clear, two pieces of Jenkins terminology are worth defining upfront:

Job – everything that you trigger to do some work in Jenkins is called a Job. A Job may have multiple pre-, during-, and post-build steps, where a step could be executing a shell script or invoking an external tool.

Build – an execution of a Job (irrespective of whether that Job actually builds code or does something else like test or deploy code). There are numerous ways to trigger a Job to create a new Build, but the most common is by polling your version control repository (e.g. Git) and automatically triggering when new changes are detected.

Jenkins is to some extent the successor of an earlier tool called CruiseControl that did the same thing but was much more fiddly to configure. Jenkins is very easy to install (seriously, try it), easy to configure, and also very easy to extend with custom plugins. The result is that Jenkins is the most installed CI engine in the world, with around 64,000 tracked installations. It also has around 700 plugins available, which do everything from integrating with version control systems to executing automated tests to posting the status of your build to IRC or Twitter.

I’ve used Jenkins on and off since 2009 and, whenever I come back to it, am always impressed at how far it has developed. As practices of continuous delivery have evolved, Jenkins has kept up (if not enabled experimentation and innovation), predominantly through new plugins. Hence the prospect of hearing the original creator talking about the latest set of must-have plugins was perhaps more exciting to me than I should really let on!

Kohsuke Kawaguchi’s lecture about using Jenkins for Continuous Delivery

After a punchy introduction to Jenkins, Kohsuke spent his whole lecture taking us through a list of the plugins that he considers most useful to implementing Continuous Delivery. Rather than sticking to the order in which he presented them, I’m going to describe them in my own categories: Staples that I use; Alternative Solutions to Plugins I Use; New Ideas To Me.

NB: it is incredibly easy to find the online documentation for these plugins, so I’m not going to go crazy and hyperlink each one; instead, please just Google the plugin name and the word Jenkins.

Staples that I use

First up was the Parameterised Builds plugin. For me this is such a staple that I didn’t even realise it is a plugin. It allows you to alter the behaviour of a Job by supplying different values to input parameters. Kohsuke likened this to passing arguments to a function in code. The alternative to using it is to have lots of similar Job definitions, all of them hard-coded for their specific purpose (bad).

Parameterised Trigger was next. This allows you to chain a sequence of Jobs in your first steps towards creating a delivery pipeline. With this plugin, the upstream Job can pass information to the downstream Job. What gets passed is usually a subset of its own parameters. If you want to pass information that is derived inside the upstream Job, you’ll need some Groovy magic… get in touch if you want help.

Arguably this plugin is also the first step towards building what might be a complex workflow of Jenkins Jobs, i.e. where steps can be executed in parallel and follow logical routing, and support goodness like restarting/skipping failed steps.

Kohsuke described a pattern of using this plugin to implement a chain of Jobs where the first Job triggers other Jobs, but does not complete until all triggered Jobs have completed. This was a new idea to me and probably something worth experimenting with.

The Build Pipeline view plugin was next. This is for me the most significant UI development that Jenkins has ever seen. It allows you to visualise a delivery pipeline as just that; if you’ve not seen it before, click here and scroll down to the screenshot. Interestingly, the plugin hadn’t had a new version published for nearly a year and this was asked about during the Q&A (EDIT: it is now under active development again!). Apparently, as can happen with Jenkins plugins, the original authors developed it to do everything they needed and moved on. A year later, 3000 more people have downloaded it and thought of their own functionality requests. It then takes a while for one of those people to decide they are going to enhance it, learn how to, and submit back. Two key features for me are:

  1. The ability to manually trigger parameterised jobs (I’ve got a Groovy solution for this, get in touch if you need it) (EDIT: now included!)
  2. The ability to define end points of the pipeline so that you can create integrated pipelines for multiple code bases.

The Join plugin allows you to trigger a Job upon the completion of more than one predecessor. This is essential if you want to implement a workflow where (for example) Job C waits for both A and B to complete.

This is a good plugin, but a word of caution: you need to patch your Build Pipeline plugin if you want it to display this pattern correctly (ask me if you need instructions).

The Dependency Graph plugin was mentioned as not only a good way of automatically visualising the dependencies between your Jobs, but also allowing you to create basic triggering using the JavaScript UI. This one I’ve used locally but not tried on a project yet. It seems good; I’m just slightly nervous that it may not be compatible with all triggers. However, on reflection, using it in read-only mode would still be useful and should be low risk.

The Job Config History plugin got a mention. If you use Jenkins and don’t use this, my advice is GET IT NOW! It is extremely useful for tracking Job configuration changes. It highlights Builds that were the first to include a Job configuration change. It allows you to diff old and new versions of a Job configuration in a meaningful way directly in the Jenkins UI. It tells you who made changes and when. AND it lets you roll back to old versions of Jobs (particularly useful when you want to back out a change – perhaps to rule out your Job configuration changes being responsible for a build failure).

Alternative Solutions to Plugins I Use

The Jenkow plugin, the Build Flow plugin and the Job DSL plugin were all recommended as alternative methods of turning individual Jobs into a pipeline or workflow.

Jenkow stood out in my mind because it stores its configuration in a Git repository, which is an interesting approach. I know I’m contradicting my appreciation of the Job Config History plugin here, but I’m basically just interested to see how well this works. In addition, Jenkow supports defining workflows in BPMN, which I guess is great if you speak it; even if you don’t, it is good that it opens up the use of many free BPMN authoring tools. All of these seem to have been created to support more advanced workflows, and I think it is encouraging that people have felt the need to do this.

The only doubt in my mind is how compatible some of these will be with the Build Pipeline plugin which for me is easily the best UI in Jenkins.

New Ideas To Me

The Promoted Builds plugin allows you to manually or automatically assign different promotions to indicate the level of quality of a particular build. This will be a familiar concept to anyone who has used ClearCase UCM where you can update the promotion levels of a label. I think in the context of Jenkins this is an excellent idea and I plan to explore whether it can be used to track sign-off of manual testing activities.

Fingerprinting (storing a database of checksums for build artefacts) was something that I knew Jenkins could do but had never looked to exploit. The idea is that you can track which artefact versions are used in different Jobs. This page gives a good intro.

The Activiti plugin was also a big eye-opener to me. Activiti seems to be an open source business process management (BPM) engine that supports manual tasks and has the key mission statement of being easy to use (like Jenkins). The reason this is of interest to me is that I think its support for manual processes could be a good mechanism for gluing Jenkins and continuous delivery into large existing enterprises, rather than having some tasks hidden in Jenkins and some hidden elsewhere. I’m also interested in whether this tool could support formal ITIL-esque release processes (for example CAB approvals), which are still highly unlikely to disappear in a cloud of DevOps smoke.