Jenkins in the Enterprise

Several months ago I attended a conference about Continuous Delivery. The highlight of the conference was a talk from Kohsuke Kawaguchi, the original creator of Jenkins. I enjoyed it so much, I decided to recap it in this blog.

For the uninitiated, Jenkins is a Continuous Integration (CI) engine. Essentially it is a job scheduling tool designed to orchestrate automation related to all aspects of software delivery (e.g. compilation, deployment, testing).

To make this blog clear, two pieces of Jenkins terminology are worth defining upfront:

Job – everything that you trigger to do some work in Jenkins is called a Job. A Job may have multiple pre-build, build, and post-build steps, where a step could be executing a shell script or invoking an external tool.

Build – an execution of a Job (irrespective of whether that Job actually builds code or does something else, like testing or deploying it). There are numerous ways to trigger a Job to create a new Build, but the most common is by polling your version control repository (e.g. Git) and automatically triggering when new changes are detected.
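
To make the terminology concrete, here is a minimal sketch of a Job that polls a Git repository and creates a new Build when changes are detected, written in the Groovy syntax of the Job DSL plugin (which comes up again later in this post). The repository URL, schedule, and build command are purely illustrative.

    job('example-build') {
        scm {
            git('https://github.com/example/app.git')   // repository to watch (illustrative)
        }
        triggers {
            scm('H/5 * * * *')    // poll for new changes roughly every five minutes
        }
        steps {
            shell('./build.sh')   // each triggered Build executes this step
        }
    }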

Jenkins is to some extent the successor of an earlier tool called CruiseControl that did the same thing but was much more fiddly to configure. Jenkins is very easy to install (seriously, try it), easy to configure, and very easy to extend with custom plugins. The result is that Jenkins is the most installed CI engine in the world, with around 64,000 tracked installations. It also has around 700 plugins available, which do everything from integrating with version control systems, to executing automated tests, to posting the status of your build to IRC or Twitter.

I’ve used Jenkins on and off since 2009, and whenever I come back to it I am always impressed at how far it has developed. As practices of continuous delivery have evolved, Jenkins has kept up (if not enabled the experimentation and innovation), predominantly through new plugins. Hence the prospect of hearing the original creator talk about the latest set of must-have plugins was perhaps more exciting to me than I should really let on!

Kohsuke Kawaguchi’s lecture about using Jenkins for Continuous Delivery

After a punchy introduction to Jenkins, Kohsuke spent his whole lecture taking us through a list of the plugins that he considers most useful for implementing Continuous Delivery. Rather than sticking to the order in which he presented them, I’m going to describe them in my own categories: Staples that I use; Alternative Solutions to Plugins I Use; and New Ideas To Me.

NB: it is incredibly easy to find the online documentation for these plugins, so I’m not going to hyperlink each one; instead, please just Google the plugin name and the word Jenkins.

Staples that I use

First up was the Parameterised Builds plugin. For me this is such a staple that I didn’t even realise it is a plugin. It allows you to alter the behaviour of a Job by supplying different values to input parameters. Kohsuke likened this to passing arguments to a function in code. The alternative is to have lots of similar Job definitions, each hard-coded for its specific purpose (bad).
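
As a rough illustration (mine, not from the talk), here is the same idea in Job DSL syntax: one parameterised Job replacing several hard-coded near-duplicates. The job name, parameters, and script are made up.

    job('deploy-app') {
        parameters {
            stringParam('TARGET_ENV', 'dev', 'Environment to deploy to')
            stringParam('APP_VERSION', 'latest', 'Version of the application to deploy')
        }
        steps {
            // one Job definition; each Build behaves differently according to the values supplied
            shell('./deploy.sh "$TARGET_ENV" "$APP_VERSION"')
        }
    }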

Parameterised Trigger was next. This allows you to chain a sequence of Jobs as a first step towards creating a delivery pipeline. With this plugin, the upstream Job can pass information to the downstream Job; what gets passed is usually a subset of its own parameters. If you want to pass information that is derived inside the upstream Job, you’ll need some Groovy magic (one possible flavour of which is sketched below)… get in touch if you want help.
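
For what it’s worth, one possible flavour of that Groovy magic (my assumption, not necessarily the approach Kohsuke described) is a system Groovy build step in the upstream Job that writes derived values to a properties file, which the Parameterised Trigger plugin can then pass downstream via its “Parameters from properties file” option. The property name and file name below are made up.

    // System Groovy build step (Groovy plugin) running in the upstream Job
    import hudson.model.*

    def build = Thread.currentThread().executable            // the currently running Build
    def version = "1.0.${build.number}"                       // a value derived inside the Job
    build.workspace.child('downstream.properties')
         .write("ARTEFACT_VERSION=${version}\n", 'UTF-8')     // read by the downstream trigger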

Arguably this plugin is also the first step towards building what might be a complex workflow of Jenkins Jobs, i.e. one where steps can be executed in parallel, follow logical routing, and support goodness like restarting/skipping failed steps.

Kohsuke described a pattern of using this plugin to implement a chain of Jobs where the first Job triggers other Jobs, but does not complete until all triggered Jobs have completed. This was a new idea to me and probably something worth experimenting with.

The Build Pipeline view plugin was next. This is, for me, the most significant UI development that Jenkins has ever seen. It allows you to visualise a delivery pipeline as just that: a pipeline. If you’ve not seen it before, click here and scroll down to the screenshot. Interestingly, the plugin hadn’t had a new version published for nearly a year, and this was asked about during the Q&A (EDIT: it is now under active development again!). Apparently, as can happen with Jenkins plugins, the original authors developed it to do everything they needed and moved on. A year later, 3,000 more people have downloaded it and thought of their own feature requests. It then takes a while for one of those people to decide to enhance it, learn how, and submit the changes back. Two key features for me are:

  1. The ability to manually trigger parameterised jobs (I’ve got a Groovy solution for this – one possible approach is sketched just after this list – get in touch if you need it) (EDIT: now included!)
  2. The ability to define end points of the pipeline so that you can create integrated pipelines for multiple code bases.
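
For the first point, here is a rough sketch of one way to trigger a parameterised Job from a system Groovy script. This is an illustration only – not necessarily my workaround nor what the plugin now does – and the job and parameter names are made up.

    import hudson.model.*

    // Schedule a new Build of a parameterised Job with explicit parameter values
    def job = Hudson.instance.getItem('deploy-app')
    def params = new ParametersAction([
        new StringParameterValue('TARGET_ENV', 'uat')
    ])
    job.scheduleBuild2(0, new Cause.UserIdCause(), params)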

The Join plugin allows you to trigger a Job upon the completion of more than one predecessor. This is essential if you want to implement a workflow where (for example) Job C waits for both A and B to complete.

This is a good plugin, but I do have a word of caution: you need to patch your Build Pipeline plugin if you want it to display this pattern correctly (ask me if you need instructions).

The Dependency Graph plugin was mentioned as not only a good way of automatically visualising the dependencies between your Jobs, but also as a way of creating basic triggering through its JavaScript UI. This one I’ve used locally but not tried on a project yet. It seems good; I’m just slightly nervous that it may not be compatible with all triggers. However, on reflection, using it in read-only mode would still be useful and should be low risk.

The Job Config History plugin got a mention. If you use Jenkins and don’t use this, my advice is GET IT NOW! It is extremely useful for tracking Job configuration changes. It highlights Builds that were the first to include a Job configuration change. It allows you to diff old and new versions of a Job configuration in a meaningful way, directly in the Jenkins UI. It tells you who made changes and when. AND it lets you roll back to old versions of Jobs (particularly useful when you want to back out a change – perhaps to rule out your Job configuration changes as the cause of a build failure).

Alternative Solutions to Plugins I Use

The Jenkow plugin, the Build Flow plugin and the Job DSL plugin were all recommended as alternative methods of turning individual Jobs into a pipeline or workflow.
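
To give a flavour, the Build Flow plugin describes the orchestration in a small Groovy script held by a “flow” Job rather than spreading it across individual Job configurations. This sketch (job names made up) runs two test Jobs in parallel after a packaging Job and only deploys once both have finished – the same fan-in behaviour the Join plugin provides:

    // Build Flow plugin DSL, defined inside a flow Job
    build('compile-and-package')
    parallel(
        { build('integration-tests') },
        { build('acceptance-tests') }
    )
    build('deploy-to-test')   // runs only after both parallel branches have completed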

Jenkow stood out in my mind for the fact that it stores its configuration in a Git repository, which is an interesting approach. I know I’m contradicting my appreciation of the Job Config History plugin here, but I’m basically just interested to see how well this works. In addition, Jenkow supports defining workflows in BPMN, which I guess is great if you speak it; even if you don’t, it is good that it opens up the use of many free BPMN authoring tools. All of these seem to have been created to support more advanced workflows, and I think it is encouraging that people have felt the need to do this.

The only doubt in my mind is how compatible some of these will be with the Build Pipeline plugin which for me is easily the best UI in Jenkins.

New Ideas To Me

The Promoted Builds plugin allows you to manually or automatically assign different promotions to indicate the level of quality of a particular Build. This will be a familiar concept to anyone who has used ClearCase UCM, where you can update the promotion level of a label. I think in the context of Jenkins this is an excellent idea, and I plan to explore whether it can be used to track sign-off of manual testing activities.

Fingerprinting (storing a database of checksums for build artefacts) was something that I knew Jenkins could do, but had never looked to exploit. The idea is that you can track which artefact versions were used in different Jobs. This page gives a good intro.
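
As a sketch of how you might switch it on (Job DSL syntax again; the exact publisher method names are my assumption, so check the plugin documentation, and the artefact path is made up), a packaging Job can archive and fingerprint its output so that any other Job using the same file is linked to it:

    job('package-app') {
        steps {
            shell('./gradlew assemble')                 // produces build/libs/app.jar (illustrative)
        }
        publishers {
            archiveArtifacts('build/libs/app.jar')      // keep the artefact with the Build
            fingerprint('build/libs/app.jar')           // record its checksum for cross-Job tracking
        }
    }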

The Activiti plugin was also a big eye-opener for me. It seems to be an open source business process management (BPM) engine that supports manual tasks and has the key mission statement of being easy to use (like Jenkins). The reason this is of interest to me is that I think its support for manual processes could be a good mechanism for gluing Jenkins and continuous delivery into large existing enterprises, rather than having some tasks hidden in Jenkins and some hidden elsewhere. I’m also interested in whether this tool could support formal ITIL-esque release processes (for example CAB approvals), which are still highly unlikely to disappear in a cloud of DevOps smoke.


5 thoughts on “Jenkins in the Enterprise”

  1. Pingback: DevOps – have fun and don’t miss the point! | markosrendell's Blog

  2. Pingback: Having a DevOps Service Team is a Pattern | markosrendell's Blog

  3. Hi Markos, I was referring to “If you want to pass information that is derived inside the upstream Job, you’ll need some Groovy magic… get in touch if you want help.” in the Parameterized Trigger section. I am trying to pass information between projects. What I’m currently doing is: (1) creating a directory X that will allow sharing data between projects in the pipeline, (2) creating a .prop file P in X with the key/value: shared_space=X, (3) using the EnvInject plugin to inject P into the environment, (4) using groovy to create some data D and writing it to X, and (5) triggering the next project in the build pipeline and passing shared_space=X to that project using the “Predefined parameters” functionality of the Parameterized Trigger plugin.

    That next step in the process then knows to read shared_space in order to access D. It then can pass the shared_space=X to the next project in the same way.

    Is there a better way to do this?
