Using ADOP and Docker to Learn Ansible

As I have written here, the DevOps Platform (aka ADOP) is an integration of open source tools that is designed to provide the tooling capability required for Continuous Delivery.  Through the concept of cartridges (plugins) ADOP also makes it very easy to re-use automation.

In this blog I will describe an ADOP Cartridge that I created as an easy way to experiment with Ansible.  Of course there are many other ways of experimenting with Ansible such as using Vagrant.  I chose to create an ADOP cartridge because ADOP is so easy to provision and predictable.  If you have an ADOP instance running you will be able to experience Ansible doing various interesting things in under 15 minutes.

To try this for yourself:

  1. Spin up an ADOP instance
  2. Load the Ansible 101 Cartridge (instructions)
  3. Run the jobs one-by-one and in each case read the console output.
  4. Re-run the jobs with different input parameters.

I recognise that this blog could be hard to follow for anyone only loosely familiar with ADOP, Docker and Ansible, so here is a quick diagram of what is going on.

[Image: docker-ansible]

The Jenkins Jobs in the Cartridge

The jobs do the following things:

The first job just demonstrates how to install Ansible on CentOS.  It installs Ansible in a Docker container in order to keep things simple and easy to clean up.  Having built a Docker image with Ansible installed, it tests the image just by running the following inside the container:

$ ansible --version
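For illustration, here is a minimal sketch of how such an image might be built and tested (the base image, package source and image tag are my assumptions rather than the cartridge's exact code):

$ cat Dockerfile
# hypothetical minimal Ansible-on-CentOS image
FROM centos:7
RUN yum install -y epel-release && \
    yum install -y ansible && \
    yum clean all

$ docker build -t ansible-101 .
$ docker run --rm ansible-101 ansible --version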

2_Run_Example_Adhoc_Commands

This job is a lot more interesting than the previous.  As the name suggests, the job is designed to run some adhoc Ansible commands (which is one of the first things you’ll do when learning Ansible).

Since the purpose of Ansible is infrastructure automation, we first need to set up an environment to run commands against.  My idea was to set up an environment of Docker containers pretending to be servers.  In real life I don't think we would ever want Ansible configuring running Docker containers (we normally want Docker containers to be immutable and certainly don't want them to have ssh access enabled).  However, I felt it was a quick way to get started and create something repeatable and disposable.

The environment created resembles the diagram above.  As you can see, we create two Docker containers (acting as servers) calling themselves web-node-1 and web-node-2, and one calling itself db-node.  The images already contain a public key (the same one Vagrant uses, actually) so that they can be ssh'd to (once again, not good practice with Docker containers, but needed so that we can treat them like servers and use Ansible).  We then use an image which we refer to as the Ansible Control Container.  We create this image by installing Ansible and adding an Ansible hosts (inventory) file that tells Ansible how to connect to the db and web "nodes" using the same key mentioned above.
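To make that more concrete, here is a minimal sketch of what such an inventory could look like (the group names, hostnames and key path are illustrative assumptions, not necessarily the cartridge's exact contents):

$ cat hosts
# hypothetical inventory - grouping the pretend servers for Ansible
[web]
web-node-1
web-node-2

[db]
db-node

[all:vars]
ansible_user=root
ansible_ssh_private_key_file=/root/.ssh/id_rsa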

With the environment in place the job runs the following ad hoc Ansible commands:

  1. ping all web nodes using the Ansible ping module: ansible web -m ping
  2. gather facts about the db node using the Ansible setup module: ansible db -m setup
  3. add a user to all web servers using the Ansible user module: ansible web -b -m user -a "name=johnd comment='John Doe' uid=1040"

By running the job and reading the console output you can see Ansible in action and then update the job to learn more.

3_Run_Your_Adhoc_Command

This job is identical to the job above in terms of setting up an environment to run Ansible.  However instead of having the hard-coded ad hoc Ansible commands listed above, it allows you to enter your own commands when running the job.  By default it pings all nodes:

ansible all -m ping
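If you are looking for inspiration, here are a few other ad hoc commands worth trying (these use standard Ansible modules; the package name is just an example):

$ ansible db -m command -a "uptime"
$ ansible web -b -m yum -a "name=httpd state=present"
$ ansible all -m setup -a "filter=ansible_distribution*"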

4_Run_A_Playbook

This job is identical to the job above in terms of setting up an environment to run Ansible.  However instead of passing in an ad hoc Ansible command, it lets you pass in an Ansible playbook to run against the nodes.  By default the playbook that gets run installs Apache on the web nodes and PostgreSQL on the db node.  Of course you can change this to run any playbook you like so long as it is set to run on a host expression that matches: web-node-1, web-node-2, and/or db-node (or "all").
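As a rough sketch, a playbook of that shape might look something like this (the module arguments and task names are my assumptions; the cartridge's actual playbook may differ):

$ cat playbook.yml
# hypothetical playbook - Apache on the web nodes, PostgreSQL on the db node
- hosts: web
  become: true
  tasks:
    - name: Install Apache
      yum:
        name: httpd
        state: present

- hosts: db
  become: true
  tasks:
    - name: Install PostgreSQL
      yum:
        name: postgresql-server
        state: present

$ ansible-playbook -i hosts playbook.yml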

How the jobs 2-4 work

To understand exactly how jobs 2-4 work, the code is reasonably well commented and should be fairly readable.  However, at a high level the following steps are run (a simplified shell sketch follows the list):

  1. Create the Ansible inventory (hosts) file that our Ansible Control Container will need so that it can connect (ssh) to our db and web “nodes” to control them.
  2. Build the Docker image for our Ansible Control Container (install Ansible like the first Jenkins job, and then add the inventory file)
  3. Create a Docker network for our pretend server containers and our Ansible Control container to all run on.
  4. Create a docker-compose file for our pretend servers environment
  5. Use docker-compose to create our pretend servers environment
  6. Run the Ansible Control Container, mounting in the Jenkins workspace if we want to run a local playbook file, or otherwise just running the ad hoc Ansible command.
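For orientation, those steps boil down to something like the following shell commands (the image, network and file names here are illustrative rather than the job's exact code):

$ docker build -t ansible-control .                # steps 1-2: image with Ansible plus the inventory file
$ docker network create ansible-net                # step 3
$ docker-compose -f pretend-servers.yml up -d      # steps 4-5: start web-node-1, web-node-2 and db-node
$ docker run --rm --net ansible-net \
    -v "${WORKSPACE}:/playbooks" \
    ansible-control ansible-playbook /playbooks/playbook.yml   # step 6 (or an ad hoc ansible command)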

Conclusion

I hope this has been a useful read and has clarified a few things about Ansible, ADOP and Docker.  If you find this useful please star the GitHub repo and/or share a pull request!

Bonus: here is an ADOP Platform Extension for Ansible Tower.

ADOP with Pivotal Cloud Foundry

As I have written here, the DevOps Platform (aka ADOP) is an integration of open source tools that is designed to provide the tooling capability required for Continuous Delivery.

In this blog I will describe integrating ADOP and the Cloud Foundry public PaaS from Pivotal.  Whilst it is of course technically possible to run all of the tools found in ADOP on Cloud Foundry, that wasn’t our intention.  Instead we wanted to combine the Continuous Delivery pipeline capabilities of ADOP with the industrial grade cloud first environments that Cloud Foundry offers.

Many ADOP cartridges, for example the Java Petclinic one, contain two Continuous Delivery pipelines:

  • The first to build and test the infrastructure code and build the Platform Application
  • The second to build and test the application code and deploy it to an environment built on the Platform Application.

The beauty of using a Public PaaS like Pivotal Cloud Foundry is that your platforms and environments are taken care of, leaving you much more time to focus on the application code.  However, you do of course still need to create an account and provision your environments:

  1. Register here
  2. Click Pivotal Web Services
  3. Create a free tier account
  4. Create an organisation
  5. Create one or more spaces

With this in place you are ready to:

  1. Spin up an ADOP instance
  2. Store your Cloud Foundry credentials in Jenkins’ Secure Store
  3. Load the Cloud Foundry Cartridge (instructions)
  4. Trigger the Continuous Delivery pipeline.

Having done all of this, the pipeline now does the following (a rough sketch of the kind of cf commands involved follows the list):

  1. Builds the code (which happens to be the JPetStore)
  2. Runs the unit tests and performs static code analysis using SonarQube
  3. Deploys the code to an environment, also known in Cloud Foundry as a Space
  4. Performs functional testing using Selenium and some security testing using OWASP ZAP.
  5. Performs some performance testing using Gatling.
  6. Kills the running application in the environment and waits to verify that Cloud Foundry automatically restores it.
  7. Deploys the application to a multi-node Cloud Foundry environment.
  8. Kills one of the nodes in Cloud Foundry and validates that Cloud Foundry automatically avoids sending traffic to the killed node.
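For anyone new to Cloud Foundry, the deployment steps ultimately come down to a handful of cf CLI commands along these lines (the org, space and application names here are placeholders, not the cartridge's exact values):

$ cf login -a https://api.run.pivotal.io -u "$CF_USER" -p "$CF_PASS" -o my-org -s my-space
$ cf push jpetstore -p target/jpetstore.war     # deploy the built artefact to the Space
$ cf scale jpetstore -i 3                       # scale out to multiple instances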

The beauty of ADOP is that all of this great Continuous Delivery automation is fully portable and can be loaded time and time again into any ADOP instance running on any cloud.

There is plenty more we could have done with the cartridge to really put the PaaS through its paces, such as generating load and watching auto-scaling in action.  Everything is on GitHub, so pull requests will be warmly welcomed!  If you've tried to follow along but got stuck at all, please comment on this blog.

Running the DevOps Platform on Microsoft Azure

As per my last post about GCE, sometimes knowing something is possible just isn't good enough.  So here is how I spun up the DevOps Platform on the Microsoft Azure cloud.  Warning: thanks to Docker Machine, this post is very similar to this earlier one.

1. I needed an Azure account.

2. I logged into my Azure account and didn’t click “view the new Portal”.

3. On the left-hand menu, I scrolled down to the bottom (it didn't immediately look to me like it would scroll, so hover) and clicked Settings.  Here I was able to see my subscription ID and copy it.

4. (Having previously installed Docker Toolbox, see here) I opened Git Bash (as an Administrator) and ran this command:

$ docker-machine create --driver azure --azure-size Standard_A3 --azure-subscription-id <the ID I just copied> markos01

I was prompted to open a URL in my browser, enter a confirmation code, and then log in with my Azure credentials.  Credit to Microsoft, this was easier than GCE, for which I needed to install the gcloud command-line utility!

You will notice that this is fairly standard.  I picked a Standard_A3 machine type, which is roughly equivalent to what we use for AWS and GCP.

5. I waited while a machine was created in Azure containing Docker

6. I cloned the ADOP Docker Compose repository from GitHub:

$ git clone https://github.com/Accenture/adop-docker-compose
$ cd adop-docker-compose

7. I ran the normal startup.sh command as follows:

$ ./startup.sh -m markos01 -c NA

I entered a user name (thanks to this recent enhancement) and hey presto:

...
SUCCESS, your new ADOP instance is ready!
Run these commands in your shell:
eval "$(docker-machine env $MACHINE_NAME)"
source env.config.sh
Navigate to http://52.160.97.159 in your browser to use your new DevOps Platform!

And just to prove it:

$ whois 52.160.97.159 | grep Org
Organization: Microsoft Corporation (MSFT)
OrgName: Microsoft Corporation
OrgId: MSFT

8. I had to go to All resources > markos01-firewall > Inbound security rules and add a rule to allow HTTP to my server on port 80.

9. I viewed my new ADOP on Azure hosted instance in (of course…) Chrome! 😉

More lovely stuff!


Running the DevOps Platform on Google Compute Engine

Sometimes knowing something is possible just isn’t good enough.  So here is how I spun up the DevOps Platform on Google Compute Engine (GCE).

1. I needed a Google Compute Engine account.

2. I enabled the Google Compute APIs for my GCE account

3. I installed the Google Cloud command-line SDK (gcloud)

4. I opened the Google Cloud SDK Shell link that had appeared in my Windows Start menu and ran:

C:\> gcloud auth login

This popped open a Chrome window and asked me to authenticate against my GCE account.

5. (Having previously installed Docker Toolbox, see here) I opened Git Bash (as an Administrator) and ran this command:

$ docker-machine create --driver google \
                 --google-project <a project in my GCE account> \
                 --google-machine-type n1-standard-2 \
                 markosadop01

You will notice that this is fairly standard.  I picked an n1-standard-2 machine type which is roughly equivalent to what we use for AWS.

6. I waited while a machine was created in Google containing Docker

7. I cloned the ADOP Docker Compose repository from GitHub:

$ git clone https://github.com/Accenture/adop-docker-compose
$ cd adop-docker-compose

8. I ran the normal startup.sh command as follows:

$ ./startup.sh -m markosadop01 -c NA

And hey presto:

...
SUCCESS, your new ADOP instance is ready!
Run these commands in your shell:
eval "$(docker-machine env $MACHINE_NAME)"
source env.config.sh
Navigate to http://104.197.235.64 in your browser to use your new DevOps Platform!

And just to prove it:

$ whois 104.197.235.64 | grep Org
Registrant Organization: Google Inc.
Admin Organization: Google Inc.
Tech Organization: Google Inc.

9. I had to go to Networks > Firewall rules and add a rule to allow HTTP to my server.
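The same can also be done from the gcloud command line; a minimal sketch (the rule name is arbitrary, and this opens port 80 on the default network):

$ gcloud compute firewall-rules create allow-http --allow tcp:80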

10. I viewed my new ADOP on Google instance in (of course…) Chrome!

Lovely stuff!

New Directions in Operating Systems: Designer Cows and Intensive Farming

On Tuesday I attended the inaugural New Directions in Operating Systems conference in London, which was excellently organised by Justin Cormack and sponsored by Bytemark (no relation to me!) and Red Hat.

I attended with the attitude that since things seem to be moving so fast in this space (e.g. Docker), there would be a high chance that I would get glimpses of the not-too-distant future.  I was not disappointed.

I'm not going to cover every talk in detail.  For that, I recommend reading this, which someone from Cambridge somehow managed to live-blog.  Also, most of the presentation links are now up here and I expect videos will follow.

Instead, here are the two main highlights that I took away.

Designer Cows

If we are aspiring to treat our servers as cattle (as per the popular metaphor from CERN), a number of talks were (to me) about how to make better cows.

The foundation of all solutions was unanimously either bare metal or a Xen hypervisor.  As per the name, this wasn't the conference for talking about the Open Compute Project or advances like those AWS has made recently with C4.  We can think of the hypervisor as our "field" in the cow metaphor.

For the sake of my vegetarian and Indian friends, let's say the reason for owning cows is to get milk.  Cows (like the servers we use today) have evolved significantly (under our influence) and are now very good at producing milk.  But they still have a lot of original "features", not all of which directly serve our requirements.  For example, they can run, they can moo, they can hear, etc.

A parallel can be drawn to a web server which similarly may possess its own “redundant features”.  Most web servers can talk the language required by printers, support multiple locales and languages, and can be connected to using a number of different protocols.  There could even be redundancy in traditional “core features” for example supporting multiple users, multiple threads, or even virtual memory.

The downside of all of this redundancy is not just the cost in storage, processing and maintenance; it's also the impact on security.  All good sysadmins will understand that unnecessary daemons running on a box (e.g. hopefully even sshd) expand your attack surface by exposing additional attack vectors.  This principle can be taken further.  As Antti Kantee said, drivers can add millions of lines of code to your server.  Every one of these presents the potential for a security defect.

Robert Watson was among those who quoted Bruce Schneier:

Defenders have to protect against every possible vulnerability, but an attacker only has to find one security flaw to compromise the whole system.

With Heartbleed, Shellshock and POODLE all in the last few months, this clearly needs some serious attention (for the good of all of us!).

To address this we saw demonstrations of:

  • Rump Kernels, which are stripped down to only include filesystems, TCP/IP, system calls and device drivers, but no threads, locking, scheduling, etc.  So this is more of a basis from which to build a cow than a working cow.
  • Unikernels, where all software layers come from the same language framework.  So this is building a whole working cow from bespoke parts with no redundant parts (ears, etc.!).
  • Rumprun, for building unikernels that run a POSIX application (mathopd, an HTTP server, in the example), i.e. taking a Rump Kernel and building it into a single-application kernel for one job.  So another way to build a bespoke cow.
  • MirageOS, a library operating system for building type-safe unikernels in OCaml.  So another way to build a bespoke, very safe cow.
  • Genode OS, a completely new operating system taking a new approach to delegating trust through the application stack.  So, to some extent, a new animal that produces milk with a completely re-conceived anatomy.

Use cases range from the "traditional", such as building normal (but much more secure) servers, to the completely novel, such as very lightweight, short-lived servers that start up almost instantaneously, do something quickly and disappear.

Docker and CoreOS, with its multi-virtual-machine-aware and now stripped-down, container-ready functionality, were mentioned.  However, whilst CoreOS has a smaller attack surface, if you are running a potentially quite big Docker container on it, you may be adding back lots of attack vectors.  Possibly, as dependency resolution for Docker images improves, this will progressively reduce the size of Docker containers and hence the number of lines of code and potential vulnerabilities included.

Intensive Farming

Two presentations of the day stood out for generally focussing on a different (and to me more recognisable) level of problem.  First was Gareth Rushgrove's talk about Configuration Management.  He covered a very wide range of concepts and tools focused on managing the configuration of single servers and fleets of servers over time, rather than novel ways to construct operating systems.  He made the statement:

If servers are cattle not pets, we need to talk about fields and farms

This inspired the title of this blog and led to some discussion (during the presentation and on Twitter) about using active runtime Configuration Management tools like Puppet to manage the adding and removing of infrastructure resources over time.  Even if most of your servers are immutable, it's quite appealing to think Puppet or Chef could manage both which servers exist and the state of those more pet-like creatures that do change over time (in the applications that most good "horse" organisations run).

Whilst AWS CloudFormation can provision and update your environment (a so-called Stack), the resultant changes may be heavy-handed and it is clearly a single-cloud-provider solution.  Terraform is a multi-cloud-provider solution to consider and supports a good preview mode, but doesn't evolve the configuration on your servers over time.

Gareth also mentioned:

  • OSv, which at first I thought was just an operating system query engine like Facebook's osquery, but it appears to be a fully API-driven operating system.
  • Atomic and OSTree, which Michael Scherer covered in the next talk.  These look like very interesting solutions for providing confidence and integrity in those bits of the application and operating system that aren't controlled by Chef, Puppet or a Dockerfile.

I really feel like I’ve barely done justice to describing even 20% of this excellent conference.  Look out for the videos and look out for the next event.

No animals were harmed during the making of the conference or this blog.

PaaA is great for DevOps too: treat your Platform as a Product!

In this previous post, I chronicled my evolving understanding of PaaS and how it has taught me the virtues of treating your Platform as an Application (PaaA). Here I documented what I believe a self-respecting platform application should do.  In this post I’m going to describe how I’ve seen PaaA help solve the Dev and Ops “problem” in large organisations (“Traditional Enterprises” if you prefer).

DevOps is a highly used/abused term and here I’d like to define it as:

An organisational structure optimised for the fastest release of changes possible within a pre-defined level of acceptable risk associated with making changes. Or simply: the organisational structure that lets you release as fast as possible without losing control and messing up too badly.

This isn’t my first attempt at tackling DevOps teams, also see a blog here and an Ignite here. Of course lots of other good things have been written about it as well, e.g. here from Matt Skelton. I believe PaaA provides a good path.

So this is the traditional diagram for siloed Dev and Ops:

[Image: devops1]

*Skull and crossbones denote issues which I won’t describe again here.  If you aren’t familiar with the standard story, I suggest viewing this excellent video by RackSpace.

For any organisation with more than one major application component (aka Product), when we add these to the diagram above it starts looking something like this:

[Image: devops2]

Each application component (or Product), e.g. the website (Site) or the Content Management System (CMS), is affected by both silos.  Obviously, traditionally the Development (Dev) silo writes the code, whilst the Operations (Ops) silo uses part-automated processes to release, host, operate, and do whatever is necessary to keep the application in service.  Whilst each "Business Application" exists in both silos, only the Ops team has the pleasure of implementing and supporting "the Platform", i.e. the infrastructure and middleware.

So if silos are bad, perhaps the solution is the following: one giant converged team:

[Image: devops3]

The problem with this is scale.  There is a high likelihood that attempting to adopt this in practice actually fails, and sub-teams quickly form within it to reinforce the original silos.

So we can look to subdivide this and make smaller combined Development and Operations teams per application component, or per small group of components.  If that works for you, then fantastic!  This is also effectively the model you are already using when connecting your in-house application to any external or 3rd-party web services (for example Experian).

[Image: devops4]

In my experience though, it is impractical and inappropriate to have so many different teams within one organisation each looking after their own Platform.  Logically (as per the experience of public cloud) and physically (as per traditional data centres and private cloud), major elements of the platform are best shared, e.g. for economies of scale, or perhaps for application performance.

So what about when you treat your Platform as an Application?  Where could Dev and Ops reside?

The optimum solution in my experience is as follows:

[Image: devopsPaaA]

The Platform Application (highlighted above by a glowing yellow halo) has a dedicated and independent, fully-combined Development and Operations team and it is treated just like any other Business Application.

Hang on a minute, haven't I just re-branded what would traditionally just be known as the Operations team as a Platform Application team?

Well, no.  Firstly, the traditional Development team usually has no Operations duties, such as following their code all the way to production and then being on call to support it once it is there.  They may not feel accountable for instrumentation, monitoring and operability, perhaps not even performance.  Now they must consider all of these and implement them within the constraints of the capabilities provided by the Platform Application upon which they depend.  By default nothing will be provided for them; it is for them to consume from the Platform Application.  So the Platform Application team are already relieved of a lot of accountability compared to a traditional Operations team.  So long as they can prove the Platform Application is available and meeting service levels, their pagers will not bother them.

Secondly, the platform team are no longer quite so different from other end-to-end Business Application teams.  They manage scope, they develop code, they manage dependencies, they measure quality, they can do Continuous Delivery and they must release their application just like anyone else.  Sure, their application is extremely critical in that everyone else (all the products using the platform instance) depends on them, but managing dependencies is very important between Business Applications as well, so this isn't a new problem.

The Platform Application delivery team (which we could also call the Platform Product team) have to constantly recognise that their application has to provide a consistent experience to consuming Business Applications.  One great technique for this (borrowed from "normal" applications) is Semantic Versioning (SemVer), where every change made has to be labelled to provide a meaningful depiction of the compatibility of the new version relative to the previous one.  In Platform Application terms we can update the SemVer description as follows (an illustrative release history appears after the list):

  1. MAJOR version when you expect consuming Business Applications to need changes, e.g. you change the RDBMS
  2. MINOR version when you don’t expect consuming Business Applications to break, but need full regression testing, e.g. configuration tuning or a security update
  3. PATCH version when you make backwards-compatible changes expected to have no or a very low chance of external impact.  For example, if the IaaS API has a change which the Platform Application fully abstracts Business Applications from.
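To make that concrete, a hypothetical release history for a Platform Application might read like this (the version numbers and changes are invented purely for illustration):

1.4.2 -> 1.4.3   PATCH: IaaS API change fully abstracted away from consuming applications
1.4.3 -> 1.5.0   MINOR: middleware security update; full regression test of consumers advised
1.5.0 -> 2.0.0   MAJOR: RDBMS changed; consuming Business Applications need changes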

Hopefully it is becoming clear how powerful and effective the mentality of treating your Platform as an Application (or Product) can be.  Everything that has been invented to help deliver normal applications can be re-used and/or adapted for Platform Applications.  The pattern is also extremely conducive to switching to a public PaaS; in fact, it is exactly how you operate when using one.

Full disclosure: I run an organisation that develops and manages multiple different Platform Applications for different enterprises.  I am most enthusiastic about this approach because I feel it reconciles a lot of the conventional wisdom around DevOps that I've heard about with what I've seen first-hand to be extremely successful in my job working in "traditional Enterprises".

Reducing Continuous Delivery Impedance – Part 2: Solution Complexity

This is my second post in a series I’m writing about impedance to doing continuous delivery and how to overcome it.  Part 1 about Infrastructure challenges can be found here.  I also promised to write about complexity in continuous delivery in this earlier post about delivery pipelines.

I'm defining "a solution" as the software application or multiple applications under current development that need to work together (and hence be tested in an integrated manner) before release to production.

In my experience, continuous delivery pipelines work extremely well when you have a simple solution with the following convenient characteristics:

  1. All code and configuration is stored in one version control repository (e.g. Git)
  2. The full solution can be deployed all the way to production without needing to test it in conjunction with other applications / components under development
  3. You are using a 3rd party PaaS (treated as a black box, like Heroku, Google App Engine, or AWS Elastic Beanstalk)
  4. The build is quick to run i.e. less than 5 minutes
  5. The automated tests are quick to run, i.e. minutes
  6. The automated test coverage is sufficient that the risks associated with releasing the software can be understood to be lower in value than the benefits of releasing.

The first 3 characteristics are what I am calling "Solution Complexity" and what I want to discuss in this post.

Here is a nice simple depiction of an application ticking all the above boxes.

[Image: perfect]

Developers can make changes in one place, know that their change will be fully tested and know that when deployed into the production platform, their application should behave exactly as expected.  (I’ve squashed the continuous delivery (CD) pipeline into just one box, but inside it I’d expect to see a succession of code deployments, and automated quality gates like this.)


But what about when our solution is more complex?

What about if we fail to meet the first characteristic and our code is in multiple places and possibly not all in version control?  This is definitely a common problem I've seen, in particular for configuration and data-loading scripts.  However, it isn't particularly difficult to solve from a technical perspective (more on the people side in a future post!): get everything managed by a version control tool like Git.

Depending on the SCM tool you use, you may not need to feel obliged to use one repository.  If you do use multiple repositories, most continuous integration tools (e.g. Jenkins) can be set up to handle builds that consume from several of them.  If you are using Git, you can even handle this complexity within your version control repository, e.g. by using submodules, as sketched below.
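As a quick illustration of the submodule approach (the repository URL and directory name here are made up for the example):

$ git submodule add https://github.com/example/config-scripts.git config-scripts
$ git commit -m "Track config-scripts as a submodule at a pinned commit"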


What about if your solution includes multiple applications like the following?

[Image: complex]

Suddenly our beautiful pipeline metaphor is broken and we have a network of pipelines that need to converge (analogous to fan-in in electronics).  This is far from a rarity, and I would say it is overwhelmingly the norm.  This certainly makes things more difficult and we now have to carefully consider how our plumbing is going to work.  We need to build what I call an "integrated pipeline".

Designing an integrated pipeline is all about determining the "points of integration" (aka POIs), i.e. the first time that testing involves the combination of two or more components.  At this point, you need to record the versions of each component so that they are kept consistent for the rest of the pipeline.  If you fail to do this, earlier quality gates in the pipeline are invalidated.

In the below example, Applications A and B have their own CD pipelines where they will be deployed to independent test environments and face a succession of independent quality gates.  Whenever a version of Application A or B gets to the end of its respective pipeline, instead of going into production, it moves into the integrated pipeline and creates a new integrated or composite build number.  After this "POI" the applications progress towards production in the same pipeline and can only move in sync.  In the diagram, version A4 of Application A and version B7 of B have made it into integration build I8.  If integration build I8 makes it through the pipeline, it is worthy of progressing to production.

[Image: integrated pipeline]

Depending on the tool you use for orchestration, there are different solutions for achieving the above.  Fundamentally it doesn't have to be particularly complicated.  You are simply aggregating version numbers, which can easily be stored together in a text document in any format you like (YAML, POM, JSON, etc.).
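For example, an integrated build record could be as simple as a small YAML file like this (the component names and version labels are taken from the diagram; the deployment-scripts entry is an assumption, anticipating the point about deployment automation below):

$ cat integration-builds/I8.yml
integrated_build: I8
components:
  application-a: A4
  application-b: B7
  deployment-scripts: D3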

Some people reading this may by now be boiling up inside, ready to scream "MICRO SERVICES" at their screens.  Micro services are by design independently deployable services.  The independence is achieved by ensuring that they fulfil, and expect to consume, strict API contracts so that integration with other services can be managed and components can be upgraded independently.  A convention like SemVer can be adopted to manage changes to contract compatibility.  I've for a while had this tagged in my head as the eBay or Amazon way of doing things, but micro services are now gaining a lot of attention.  If you are implementing micro services and achieving this independence between pipelines, that's great.  Personally, on the one micro services solution I've worked on so far, we still opted for an integrated pipeline that operated on an integrated build and produced predictable upgrades to production (we are looking to relax that at some point in the future).

Depending on how you are implementing your automated deployment, you may have deployment automation scripts that live separately from your application code.  Obviously we want to use consistent versions of these throughout deployments to different environments in the pipeline.  Therefore I strongly advise managing these scripts as a component in the same manner.

What about if you are not using a PaaS?  In my experience, this represents the vast majority of solutions I've worked on.  If you are not deploying into a fully managed container, you have to care about the version of the environment that you are deploying into.  The great thing about treating infrastructure as code (assuming you overcome the associated impedance) is that you can treat it like an application, give it a pipeline and feed it into the integrated pipeline (probably at a very early POI).  Effectively you are creating your own platform and performing continuous delivery on that.  Obviously, the further your production environment is from being a versionable component like this, the greater the manual effort to keep environments in sync.

[Image: paas]


Coming soon: more sources of impedance to doing continuous delivery: Software packages, Organisation size, Organisation structure, etc.


(Thanks to Tom Kuhlmann for the graphic symbols.)