Reducing Continuous Delivery Impedance – Part 1: Infrastructure

As I’ve stated previously, I believe continuous delivery is a practice that all IT teams should follow.  As I’ve stressed, doing continuous delivery shouldn’t be viewed as a binary capability that you either can or cannot do; rather, it should be viewed as a set of principles that guide practices.

In this series of posts, I’m going to summarise the main challenges to achieving high-performing continuous delivery that I’ve been sorry to encounter, but also motivated to try to overcome, first-hand over roughly the last 10 years.

I borrowed the electronic engineering term “impedance” for two reasons.  Firstly, it essentially means resistance, which can be defined as:

“refusal to accept or to comply with something”, “the use of force to oppose something”, and “the stopping effect exerted by one thing on another”.

All apt ways of characterising the things that get in the way of continuous delivery.  Secondly, in electronics, impedance is actually the opposition experienced by alternating current (AC, i.e. what comes out of mains sockets), which seems appropriate given that continuous delivery is cyclical.  Also, given the current mania for suddenly calling everything digital, analogue concepts appeal to me.

So how can infrastructure impede doing continuous delivery?

A foundation of continuous delivery is the ability to automate not just application code deployments, but as much as you can, i.e. as far down the stack as possible.  It is unfortunately common, in my experience, for infrastructure solutions to simply not be conducive to this.

I believe that public cloud presents the best experience possible here and is extremely conducive to automation.  For example, in AWS, CloudFormation allows you to provision your whole (virtual) data centres from scratch.  Heat does the same in OpenStack clouds.  When using the Cloud Foundry PaaS, BOSH can be used to do the same thing against AWS and also VMware-underpinned data centres.  Many such solutions are becoming readily available.

These are incredibly empowering as they allow you to create environments from version control and to scale the number and types of environments you have up and down dynamically during the software release lifecycle (pipeline).  They allow the infrastructure to be treated as code and subjected to software engineering and quality processes.  This puts infrastructure in the hands of anyone capable of writing code (which of course can be tempered by governance over the adoption of changes – for example via pull requests in GitHub).  Fundamentally, this significantly helps break down the barrier between Development and Operations teams.
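
To make that concrete, here is a minimal sketch (my own illustration, not from any particular project) of driving AWS CloudFormation from Python with boto3: the template lives in version control alongside the application code, and an environment can be created or torn down on demand from a pipeline.  The stack name, template path and region are assumed placeholders.

    # Minimal sketch: create and destroy an environment from a versioned
    # CloudFormation template. Stack name, template path and region are
    # illustrative assumptions, not values from the original post.
    import boto3

    def create_environment(stack_name, template_path, region="eu-west-1"):
        """Create a CloudFormation stack from a template held in version control."""
        cfn = boto3.client("cloudformation", region_name=region)
        with open(template_path) as f:
            template_body = f.read()
        cfn.create_stack(
            StackName=stack_name,
            TemplateBody=template_body,
            Capabilities=["CAPABILITY_IAM"],  # only needed if the template creates IAM resources
        )
        # Block until the stack exists so the pipeline can proceed deterministically.
        cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

    def destroy_environment(stack_name, region="eu-west-1"):
        """Tear the environment down; the same template can recreate it on demand."""
        cfn = boto3.client("cloudformation", region_name=region)
        cfn.delete_stack(StackName=stack_name)
        cfn.get_waiter("stack_delete_complete").wait(StackName=stack_name)

    if __name__ == "__main__":
        create_environment("myapp-test", "infrastructure/environment.yaml")

Because the whole environment is expressed in a template under version control, a change to the infrastructure goes through exactly the same review and pipeline steps as a change to the application.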

Unfortunately, the fantastic world of public cloud is not available to many, who instead have to accept something on a spectrum from a less sophisticated cloud, which offers some self-service features (e.g. the ability to create new virtual machines but not to make network changes), all the way down to the dreaded manual request for a physical server to be selected, ordered, purchased, delivered, racked, stacked, etc. (which can run into months of lead time).  Private cloud is an extremely overloaded term.  I’m open to accepting that some organisations could deliver it at the same level of sophistication and service that public cloud provides, but from my experience so far it is often a long way off, and this bleak description is unfortunately familiar and can even be misnamed as some type of internal cloud service.

Continuous delivery becomes exponentially harder the less your infrastructure solution resembles public cloud.  Getting the servers you need, with their long lead times, can be the first problem.  Sometimes I’ve even seen the need to make business cases to convince someone (who seems to have the job of minimising the number of servers used) that your reasons for asking for servers are genuine.  Obviously this wastes a huge amount of time and effort.

Even once servers have been created, it is not uncommon to struggle to gain access to them.  With a public cloud, you are fully in control and can adopt a virtual private networking solution native to the IaaS provider, or implement your own.  I’m not saying this doesn’t need to be done responsibly and with due diligence, but it significantly simplifies things both technically (software-defined networking is just a given in the cloud) and from a people perspective (more on that in a future post).

Once you’ve gained access, you may not be empowered with super-user access and hence be unable to implement automation.  You are forced to treat servers as “pets” as opposed to “cattle”, i.e. it is difficult to destroy and replace them, and it is far more tempting to manually tweak them, creating snowflakes.  Without IaaS, it isn’t uncommon to have to tolerate glaring differences between non-production and production servers, for example not even running the same operating system.  It isn’t easy to build and trust a continuous delivery pipeline with that level of inconsistency.

It’s fair to say that automated configuration management tools like Puppet and Chef can make big steps towards improving consistency, even between physical servers (although it’s not unheard of for infrastructure/security etc. teams to actually outlaw these).  But without the mindset that servers are fully disposable, it is all too easy to log on to servers and for integrity to slip (creating a smell).
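
For illustration, here is a toy sketch (in Python, and very much my own simplification rather than actual Puppet or Chef code) of the desired-state, idempotent idea those tools are built around: declare what the server should look like, check the current state, and only change what differs.  The file paths and the Debian-style package commands are assumptions.

    # Toy sketch of desired-state configuration: each "ensure" function is
    # idempotent, so running it repeatedly converges servers on the same state.
    # Paths and the Debian/Ubuntu package commands are illustrative assumptions.
    import hashlib
    import shutil
    import subprocess
    from pathlib import Path

    def ensure_file(src: Path, dest: Path) -> bool:
        """Copy src over dest only if the contents differ; return True if changed."""
        if dest.exists() and hashlib.sha256(dest.read_bytes()).digest() == hashlib.sha256(src.read_bytes()).digest():
            return False  # already in the desired state, so do nothing
        shutil.copy(src, dest)
        return True

    def ensure_package(name: str) -> bool:
        """Install a package only if it is missing."""
        installed = subprocess.run(["dpkg", "-s", name], capture_output=True).returncode == 0
        if not installed:
            subprocess.run(["apt-get", "install", "-y", name], check=True)
        return not installed

    if __name__ == "__main__":
        changed = ensure_package("nginx")
        changed |= ensure_file(Path("config/nginx.conf"), Path("/etc/nginx/nginx.conf"))
        if changed:
            subprocess.run(["systemctl", "reload", "nginx"], check=True)

Real tools add much more (dependency ordering, reporting, dry runs), but the key point is the same: the desired state lives in version control, and drift is corrected rather than accumulated.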

As I’ve said here, there are some opportunities to do some pretty smart things locally on your development workstation.  But even when doing that, if those processes bear no resemblance to what gets used downstream, their benefits are limited.

With the above in mind, I highly recommend starting small by allowing development and test environments to use public cloud and understanding first-hand how powerful it can be.  Demonstrate the benefits achieved there as early as possible and start building the cultural acceptance of cloud.

Coming soon:  more sources of impedance to doing continuous delivery: Solution Complexity, Software packages, Organisation size, Organisation structure, etc.

Update: here is a video of me talking about this and other sources of impedance:
