Using ADOP and Docker to Learn Ansible

As I have written here, the DevOps Platform (aka ADOP) is an integration of open source tools that is designed to provide the tooling capability required for Continuous Delivery.  Through the concept of cartridges (plugins) ADOP also makes it very easy to re-use automation.

In this blog I will describe an ADOP Cartridge that I created as an easy way to experiment with Ansible.  Of course there are many other ways of experimenting with Ansible such as using Vagrant.  I chose to create an ADOP cartridge because ADOP is so easy to provision and predictable.  If you have an ADOP instance running you will be able to experience Ansible doing various interesting things in under 15 minutes.

To try this for yourself:

  1. Spin up an ADOP instance
  2. Load the Ansible 101 Cartridge (instructions)
  3. Run the jobs one-by-one and in each case read the console output.
  4. Re-run the jobs with different input parameters.

To anyone only loosely familiar with ADOP, Docker and Ansible, I recognise that this blog could be hard to follow so here is a quick diagram of what is going on.


The Jenkins Jobs in the Cartridge

The jobs do the following things:

As the name suggests, this job just demonstrates how to install Ansible on CentOS.  It installs Ansible in a Docker container in order to keep things simple and easy to clean up.  Having built a Docker image with Ansible installed, it tests the image simply by running the following inside the container.

$ ansible --version


This job is a lot more interesting than the previous one.  As the name suggests, it is designed to run some ad hoc Ansible commands (one of the first things you’ll do when learning Ansible).

Since the purpose of Ansible is infrastructure automation, we first need to set up an environment to run commands against.  My idea was to set up an environment of Docker containers pretending to be servers.  In real life I don’t think we would ever want Ansible configuring running Docker containers (we normally want Docker containers to be immutable and certainly don’t want them to have ssh access enabled).  However I felt it was a quick way to get started and create something repeatable and disposable.

The environment created resembles the diagram above.  As you can see we create two Docker containers (acting as servers) calling themselves web-node and one calling itself db-node.  The images already contain a public key (the same one Vagrant uses, actually) so that they can be ssh’d to (once again not good practice with Docker containers, but needed so that we can treat them like servers and use Ansible).  We then use an image which we refer to as the Ansible Control Container.  We create this image by installing Ansible and adding an Ansible hosts file that tells Ansible how to connect to the db and web “nodes” using the same key mentioned above.
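As an illustration, the Ansible hosts (inventory) file added to the Ansible Control Container might look something like the following sketch (the group names, user and key path are my assumptions, not necessarily what the cartridge actually uses):

```ini
# Assumed inventory: one group per node role, all reachable over ssh
# using the shared (vagrant) private key mentioned above
[web]
web-node-1 ansible_ssh_user=root ansible_ssh_private_key_file=/root/.ssh/id_rsa
web-node-2 ansible_ssh_user=root ansible_ssh_private_key_file=/root/.ssh/id_rsa

[db]
db-node ansible_ssh_user=root ansible_ssh_private_key_file=/root/.ssh/id_rsa
```

With a file like this in place, ansible web -m ping targets the two web nodes and ansible db -m setup targets the db node.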

With the environment in place the job runs the following ad hoc Ansible commands:

  1. ping all web nodes using the Ansible ping module: ansible web -m ping
  2. gather facts about the db node using the Ansible setup module: ansible db -m setup
  3. add a user to all web servers using the Ansible user module:  ansible web -b -m user -a 'name=johnd comment="John Doe" uid=1040'

By running the job and reading the console output you can see Ansible in action and then update the job to learn more.


This job is identical to the job above in terms of setting up an environment to run Ansible.  However instead of having the hard-coded ad hoc Ansible commands listed above, it allows you to enter your own commands when running the job.  By default it pings all nodes:

ansible all -m ping


This job is identical to the job above in terms of setting up an environment to run Ansible.  However instead of passing in an ad hoc Ansible command, it lets you pass in an Ansible playbook to run against the nodes.  By default the playbook that gets run installs Apache on the web nodes and PostgreSQL on the db node.  Of course you can change this to run any playbook you like so long as it is set to run on a host expression that matches web-node-1, web-node-2, and/or db-node (or “all”).
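For reference, a playbook along the lines of the default one could look like this minimal sketch; the package names and yum usage assume the CentOS-based nodes described earlier:

```yaml
---
# Assumed sketch: install Apache on the web nodes, PostgreSQL on the db node
- hosts: web
  become: yes
  tasks:
    - name: Install Apache
      yum: name=httpd state=present
    - name: Ensure Apache is running
      service: name=httpd state=started

- hosts: db
  become: yes
  tasks:
    - name: Install PostgreSQL server
      yum: name=postgresql-server state=present
```

Note that the hosts expressions (web, db) match the inventory groups containing web-node-1, web-node-2 and db-node.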

How the jobs 2-4 work

To understand exactly how jobs 2-4 work, the code is reasonably well commented and should be fairly readable.  However, at a high-level the following steps are run:

  1. Create the Ansible inventory (hosts) file that our Ansible Control Container will need so that it can connect (ssh) to our db and web “nodes” to control them.
  2. Build the Docker image for our Ansible Control Container (install Ansible like the first Jenkins job, and then add the inventory file)
  3. Create a Docker network for our pretend server containers and our Ansible Control container to all run on.
  4. Create a docker-compose file for our pretend servers environment
  5. Use docker-compose to create our pretend servers environment
  6. Run the Ansible Control Container mounting in the Jenkins workspace if we want to run a local playbook file or if not just running the ad hoc Ansible command.
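The steps above can be sketched in shell; everything here (file contents, image and network names) is an assumption to illustrate the flow, not the cartridge’s exact code:

```shell
# Step 1: create the inventory file the Ansible Control Container will use
cat > hosts <<'EOF'
[web]
web-node-1
web-node-2

[db]
db-node
EOF

# Step 4: create a docker-compose file for the pretend servers
# ('ssh-node' is an assumed image containing sshd plus the public key)
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web-node-1:
    image: ssh-node
  web-node-2:
    image: ssh-node
  db-node:
    image: ssh-node
EOF

# Steps 2, 3, 5 and 6 would then be along these lines:
#   docker build -t ansible-control .     # bake Ansible + inventory in
#   docker network create ansible-net     # shared network for all nodes
#   docker-compose up -d                  # start the pretend servers
#   docker run --rm --net ansible-net ansible-control ansible all -m ping

echo "inventory and compose files written"
```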


I hope this has been a useful read and has clarified a few things about Ansible, ADOP and Docker.  If you find this useful please star the GitHub repo and/or share a pull request!

Bonus: here is an ADOP Platform Extension for Ansible Tower.


Abstraction is not Obsoletion – Abstraction is Survival

Successfully delivering Enterprise IT is a complicated, probably even complex, problem.  What’s surprising is that as an industry, many of us are still comfortable accepting so much of the problem as our own to manage.

Let’s consider an albeit very simplified and arguably imprecise view of The “full stack”:

  • Physical electrical characteristics of materials (e.g. copper / p-type silicon, …)
  • Electronic components (resistor, capacitor, transistor)
  • Integrated circuits
  • CPUs and storage
  • Hardware devices
  • Operating Systems
  • Assembly Language
  • Modern Software Languages
  • Middleware Software
  • Business Software Systems
  • Business Logic

When you examine this view, hopefully (irrespective of what you think about what’s included or missing and the order) it is clear that when we do “IT” we are already extremely comfortable being abstracted from detail. We are already fully ready to use things which we do not, and may never, understand. When we build an eCommerce platform, an ERP, or a CRM system, little thought is given to electronic components, for example.

My challenge to the industry as a whole is to recognise more openly the immense benefit of abstraction for which we are already entirely dependent and to embrace it even more urgently!

Here is my thinking:

  • Electrons are hard – we take them for granted
  • Integrated circuits are hard – so we take them for granted
  • Hardware devices (servers for example) are hard – so why are so many enterprises still buying and managing them?
  • The software that it takes to make servers useful for hosting an application is hard – so why are we still doing this by default?

For solutions that still involve writing code, the most extreme example of abstraction I’ve experienced so far is the Lambda service from AWS.  Some seem to have started calling such things ServerLess computing.

With Lambda you write your software functions and upload them ready for AWS to run for you. Then you configure the triggering event that will cause your function to run. Then you sit back and pay for the privilege whilst enjoying the benefits. Obviously if the benefits outweigh the cost of the service you are making money. (Or perhaps in the world of venture capital, if the benefits are generating lots of revenue or even just active user growth, for now you don’t care…)

Let’s take a mobile example. Anyone with enough time and dedication can sit at home on a laptop and start writing mobile applications. If they write it as a purely standalone, offline application, and charge a small fee for it, theoretically they can make enough money to retire on without even knowing how to spell server.  But in practice most applications (even if they just rely on in-app adverts) require network enabled services. Even then our app developer still doesn’t need to spell server: they just need to use the API of an online ad company, e.g. AdWords, and their app will start generating advertising revenue. Next perhaps the application relies on persisting data off the device, or on notifications being pushed to it. The developer still only needs to use another API to do this; for example Parse can provide all of that to you as a programming service.  You just use the software development kit and are completely abstracted from servers.

So why are so many enterprises still exposing themselves to so much of the “full stack” above?  I wonder how much inertia there was to integrated circuits in the 1950s and how many people argued against abstraction from transistors…

To survive is to embrace Abstraction!


[1] Abstraction in a general computer science sense not a mathematical one (as used by Joel Spolsky in his excellent Law of Leaky Abstractions blog.)

Join the DevOps Community Today!

As I’ve said in the past, if your organisation does not yet consider itself to be “doing DevOps” you should change that today.

If I was pushed to say the one thing I love most about the DevOps movement, it would be the sense of community and sharing.

I’ve never experienced anything like it previously in our industry.  It seems like everyone involved is united by being passionate about collaborating in as many ways as possible to improve:

  • the world through software
  • the rate at which we can do that
  • the lives of those working in our industry.

The barrier to entry to this community is extremely low, for example you can:

You could also consider attending the DevOps Enterprise Summit London (DOES).  It’s the third DOES event and the first ever in Europe, and it is highly likely to be one of the most important professional development things you do this year.  Organised by Gene Kim (co-author of The Phoenix Project) and IT Revolution, the conference is highly focused on bringing together anyone interested in DevOps and providing them with as much support as humanly possible in two days.  This involves presentations from some of the most advanced IT organisations in the world (aka unicorns), as well as many from those in traditional enterprises who may be on a very similar journey to you.  Already confirmed are talks from:

  • Rosalind Radcliffe talking about doing DevOps with Mainframe systems
  • Ron Van Kemenade CIO of ING Bank
  • Jason Cox about doing DevOps transformation at Disney
  • Scott Potter Head of New Engineering at News UK
  • And many more.

My recommendation is to get as many of your organisation along to the event as possible.  They won’t be disappointed.

Early bird tickets are available until 11th May 2016.

(Full disclosure – I’m a volunteer on the DOES London committee.)


Start Infrastructure Coding Today!

* Warning this post contains mildly anti-Windows sentiments *

It has never been easier to get ‘hands-on’ with Infrastructure Coding and Containers (yes including Docker), even if your daily life is spent using a Windows work laptop.  My friend Kumar and I proved this the other Saturday night in just one hour in a bar in Chennai.  Here are the steps we performed on his laptop.  I encourage you to do the same (with an optional side order of Kingfisher Ultra).


  1. We installed Docker Toolbox.
    It turns out this is an extremely fruitful first step as it gives you:

    1. Git (and in particular GitBash). This allows you to use the world’s best Software Configuration Management tool, Git, and welcomes you into the world of being able to use and contribute to Open Source software on GitHub.  Plus it has the added bonus of turning your laptop into something which understands good wholesome Linux commands.
    2. Virtual Box. This is a hypervisor that turns your laptop from being one machine running one Operating System (Windoze) into something capable of running multiple virtual machines with almost any Operating System you want (even UniKernels!).  Suddenly you can run (and develop) local copies of servers that from a software perspective match Production.
    3. Docker Machine. This is a command line utility that will create virtual machines for running Docker on.  It can do this either locally on your shiny new Virtual Box instance or remotely in the cloud (even the Azure cloud – Linux machines of course)
    4. Docker command line. This is the main command line utility of Docker.  This will enable you to download and build Docker images, and turn them into running Docker containers.  The beauty of the Docker command line is that you can run it locally (ideally in GitBash) on your local machine and have it control Docker running on a Linux machine.  See diagram below.
    5. Docker Compose. This is a utility that gives you the ability to run and associate multiple Docker containers by reading what is required from a text file.
  2. Having completed step 1, we opened up the Docker Quickstart Terminal by clicking the entry that had appeared in the Windows start menu. This runs a shell script via GitBash that performs the following:
    1. Creates a virtual box machine (called ‘default’) and starts it
    2. Installs Docker on the new virtual machine
    3. Leaves you with a GitBash window open that has the necessary environment variables set to point the Docker command line utility at your new virtual machine.
  3. We wanted to test things out, so we ran:
    $ docker ps -a


    This showed us that our Docker command line tool was successfully talking to the Docker daemon (process) running on the ‘default’ virtual machine. And it showed us that no containers were either running or stopped on there.

  4. We wanted to test things a little further, so we ran:
    $ docker run hello-world
    Hello from Docker.
    This message shows that your installation appears to be working correctly.
    To generate this message, Docker took the following steps:
    The Docker client contacted the Docker daemon.
    The Docker daemon pulled the "hello-world" image from the Docker Hub.
    The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
    The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.
    To try something more ambitious, you can run an Ubuntu container with:
    $ docker run -it ubuntu bash
    Share images, automate workflows, and more with a free Docker Hub account:
    For more examples and ideas, visit:


    The output is very self-explanatory.  So I recommend reading it now.

  5. We followed the instructions above to run a container from the Ubuntu image.  This started a container running Ubuntu for us, and we ran a command to satisfy ourselves that we were running Ubuntu.  Note one slight modification: we had to prefix the command with ‘winpty’ to work around a tty-related issue in GitBash
    $ winpty docker run -it ubuntu bash
    root@2af72758e8a9:/# apt-get -v | head -1
    apt 1.0.1ubuntu2 for amd64 compiled on Aug  1 2015 19:20:48
    root@2af72758e8a9:/# exit
    $ exit


  6. We wanted to run something else, so we ran:
    $ docker run -d -P nginx:latest


  7. This caused the Docker command line to do more or less what is stated in the previous step with a few exceptions.
    • The -d flag caused the container to run in the background (we didn’t need -it).
    • The -P flag caused Docker to expose the ports of Nginx back to our Windows machine.
    • The image was nginx rather than ubuntu.  We didn’t need to specify a command for the container to run after starting (leaving it to run its default command).
  8. We then ran the following to establish how to connect to our Nginx:
    $ docker-machine ip default
     $ docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                           NAMES
    826827727fbf        nginx:latest        "nginx -g 'daemon off"   14 minutes ago      Up 14 minutes>80/tcp,>443/tcp   ecstatic_einstein


  9. We opened a proper web browser (Chrome) and navigated to the IP address and port shown above (your IP address may differ). Pleasingly we were presented with the: ‘Welcome to nginx!’ default page.
  10. We decided to clean up some of what we’re created locally on the virtual machine, so we ran the following to:
    1. Stop the Nginx container
    2. Delete the stopped containers
    3. Demonstrate that we still had the Docker ‘images’ downloaded


$ docker kill `docker ps -q`

$ docker rm `docker ps -aq`




$ docker ps -a

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

$ docker images

REPOSITORY                     TAG                 IMAGE ID            CREATED             VIRTUAL SIZE

nginx                          latest              sha256:99e9a        4 weeks ago         134.5 MB

ubuntu                         latest              sha256:3876b        5 weeks ago         187.9 MB

hello-world                    latest              sha256:690ed        4 months ago        960 B



  11. We went back to Chrome and hit refresh. As expected Nginx was gone.
  12. We opened Oracle VM Virtual Box from the Windows start menu so that we could observe our ‘default’ machine listed as running.
  13. We ran the following to stop our ‘default’ machine and observed it stop in Virtual Box:
    $ docker-machine stop default


  14. Finally we installed Vagrant. This is essentially a much more generic version of Docker Machine, capable of creating virtual machines in Virtual Box not just for Docker but for many other purposes.  For example, from an Infrastructure Coding perspective, you might run a virtual machine for developing Chef code.
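To give a flavour of Vagrant, the text file it reads (a Vagrantfile) can be as short as this; the box name is just one commonly used example:

```ruby
# Minimal Vagrantfile: one Ubuntu virtual machine in Virtual Box
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
end
```

Running vagrant up in the same directory would then create and boot the machine.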


Not bad for one hour on hotel wifi!

Kumar keenly agreed he would complete the following next steps.  I hope you’ll join him on the journey and Start Infrastructure Coding Today!

  1. Learn Git. It really only takes 10 minutes with this tutorial LINK to learn the basics.
  2. Docker – continue the journey here
  3. Vagrant
  4. Chef
  5. Ansible
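If you want a head start on step 1, the very basics of Git can be tried immediately in GitBash (the repository and file names here are just examples):

```shell
# Create a local repository, commit one file, and inspect the history
mkdir git-demo && cd git-demo
git init -q
echo "Hello Infrastructure Coding" > README.md
git add README.md
git -c user.name="demo" -c user.email="demo@example.com" \
    commit -qm "First commit"
git log --oneline        # shows the single commit we just made
```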


Please share any issues following this and I’ll improve the instructions.  Please also share any other useful tutorials and I will add those too.

Start DevOps Today!

Last week I spent an inspiring 3 days at the DevOps Enterprise Summit (DOES15) in San Francisco.  I had the pleasure of speaking but most importantly learning from everyone I heard present and chatted with.  The most interesting thing about events like this is that they can change your perspective on things you felt you knew well.  For example fundamentals such as “What is DevOps these days?”.

We all like to create taxonomies to make sense of things and I found myself grouping the practitioners I spoke to into 3 categories.

  1. People working for the DevOps poster children (Netflix, Google etc.).  An inspiration to us all through what they achieve both with IT and through their willingness to be open and share.
  2. People working for large enterprises who are on tremendous journeys of DevOps transformation, have fantastic stories to tell, and are still living day-to-day around many things they would like to change dramatically.
  3. People who haven’t yet built up momentum around DevOps and seemed almost overwhelmed by the stories and performance of people in categories 1 and 2.

Naturally it was category 3 that I felt most drawn to understanding and talking to them inspired me to write this post.

Home truth #1: Improving IT is not at all new to DevOps(!)

Whether you have just heard the name, or have been doing it for several years, if you are ambitious and passionate about what you do, you are without a doubt already committed to improving the IT function (and hence directly the businesses) in which you operate.

Home truth #2: writing off DevOps as just a fashionable name for improving IT is a mistake.

I believe “doing DevOps” is something every organisation must consciously start doing – today (if they haven’t already).  It doesn’t take everyone (at first), or even everyone in a particular business unit, department or team.  It just takes at least two people to grit their teeth and agree that they are going to consciously make a collaborative effort to improve IT with a new level of energy, ambition, and a “new” name.

So here is what will be different once you start “doing DevOps”.
  • Just the act of starting something new and exciting will hopefully immediately inspire new levels of energy, motivation, ambition, and sense of purpose (perhaps even create flow).
  • You now have a useful name for your efforts to improve IT and one you can research to tap into the wealth of blogs, podcasts, meetups, conferences, Open Source, tools, and lessons learnt out there.
  • You can now relate the things you are doing (and trying to do) to the practices demonstrated by DevOps poster children.
  • You are now part of the huge support network in the form of the DevOps community, which has grown dramatically, built on a solid foundation of inclusivity and sharing.
  • Your new community is filled with individuals and companies fully motivated by the opportunity to share their experiences for the greater good of our industry and the greater good for society and humanity.
  • You have a better chance than ever of getting internal investment in your cause (DevOps being in vogue has advantages).
  • By stating (especially in public) your ambition and commitment to build a lean, automated, responsive, reliable IT organisation, you are now more likely to be able to grow an inspired workforce and more likely to attract talent from outside.
So my advice (especially to people who identify with Category 3) is as follows:
  • Don’t let anyone tell you that you aren’t doing DevOps (it’s a journey).
  • If you are doing DevOps on any scale in your company don’t let anyone convince you that you aren’t key to the future success of the organisation (YOU ARE!)
  • Don’t feel disheartened by where you think your organisation is today relative to some kind of DevOps utopia, the companies you read about, or your perceived view of your peers. It’s the rate at which you can learn to continuously improve IT within your organisation that will secure your organisation’s future, not precisely where you all are today.
  • Don’t downplay your ambitions, your hard learnt lessons, or your achievements to date – celebrate and share them!
  • Watch the videos of DOES15, YOU WILL BE INSPIRED.

So if we are treating our Platform as an Application (PaaA), what should it do?

In my last post, I described the eureka moment I’d had whilst using Cloud Foundry.  I’d suddenly realised the fantastic benefits of treating your Platform as an Application.  I’d then decided this pattern needed its very own acronym, “PaaA”, to highlight the distinction from using a Platform Application delivered by someone else as a service (i.e. what is traditionally called a PaaS).

In hindsight it is really obvious that PaaA is a good idea – if it wasn’t, why would PaaS providers (who manage platforms commercially on an industrialised scale for a living) bother doing it? In this post I’m going to define the features that I think any self-respecting Platform Application should have.

A quick aside: Should you build or buy/reuse a Platform Application?  There are plenty of applications available to buy/reuse:

In my opinion, since we’re treating our Platform as an Application, the usual build vs buy logic should be applied!  However (not to avoid the question entirely), my advice is that if you are in a greenfield scenario you should try to buy/reuse, and if you already have a platform, start an initiative to move towards treating what you already have platform-wise more like an application.

So if we are treating our Platform like an Application (PaaA), what should it do?

Firstly we need a name for the part of the IT solution which is not the platform.  It’s tempting to take a platform-centric position and call it the Guest Application (since it resides and functions on the platform). I fear some may consider this name derogatory, so for lack of a better alternative, I’m calling it the Business Application. In terms of cardinality, I would expect any Platform Application to host one or more Business Applications.

The most basic requirement of a Platform Application is that it can provide the run-time operating system and middleware dependencies needed for the Business Application to run.  For example if the Business Application is a Java web application requiring a Servlet Container, the Platform Application must provide that.  If an RDBMS, e.g. PostgreSQL, is required, the platform must of course also provide that.  To put it another way, we’re treating the whole environment minus the Business Application as something the Platform Application must supply.

All applications should be buildable from version control and releasable with a unique build number.  A Platform Application is no different, and it also needs a fully automated and repeatable installation (platform deployment) process, i.e. you should be able to fully destroy and recreate your whole platform (aka phoenix it) with great confidence.  You should also be able to make confident statements like “We completed all our testing on version of the platform”. (My use of a version number resembling Semantic Versioning was deliberate as I believe it is very useful for Platform Applications.)

A Platform Application should abstract the Business Applications that run on top of it from the underlying infrastructure, i.e. the servers, storage and network.  Whilst doing this, the Platform Application must provide infrastructure features to the level of sophistication required by the hosted Business Applications, for example auto-scaling and high availability / anti-fragility.  A nice-to-have feature is some built-in independence of the underlying infrastructure solution. This provides a level of portability to deploy the Platform Application to different physical, virtual and cloud infrastructure providers.

A Platform Application should work coherently with your software delivery lifecycle. For example it must have a cost effective solution for supporting multiple isolated test environments.  Cloud Foundry, for instance, supports multi-tenancy through Spaces, of which you can create multiple per Platform Application instance.

A Platform Application must make the process of performing fully automated deployments of the Business Applications onto it trivial.  Of course the release packages of the Business Applications must conform to the required specifications.  This includes both the binary artefacts format e.g. War files and any required configuration (aka manifest) files.
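To make that concrete in Cloud Foundry terms, the configuration (manifest) file accompanying a War file can be as small as this; the application name, path and sizing below are invented for illustration:

```yaml
---
# Hypothetical manifest.yml for a Business Application release package
applications:
- name: shop-web
  path: target/shop-web.war
  memory: 512M
  instances: 2
```

Given a manifest like this, a single deployment command is all the platform should require from the person (or pipeline) doing the deployment.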

There are a number of security concerns for a Platform Application.  It needs an authentication and authorisation solution for controlling administration of the platform, e.g. who can perform Business Application deployments or create new environments.  The platform must have an appropriate solution for securely managing the keys and certificates required by the Business Applications.  Finally the Platform Application must support the access protocols required by the Business Applications, e.g. https.

There are a number of logging concerns for a Platform Application.  Of course it should create adequate logs of its own so that it can be operated successfully.  It also needs a solution for managing the logs of the Business Applications, for example an inbuilt aggregation service that could be based on Logstash, Kibana and Elasticsearch.

Finally there are monitoring concerns for a Platform Application.  Of course it needs to monitor itself and the underlying infrastructure that it is managing.  It also needs to provide a standardised solution for monitoring the Business Applications deployed onto it.


I’d love to hear if anyone thinks of other core features that I should add to the list.


I finally get PaaS – they should actually be called Platform as an Application

I’ve been aware of Platforms-as-a-Service (PaaS) for a few years, but I wouldn’t say I completely understood how important they are until now.  In part I blame the name, which led me to think PaaS is all about receiving a service.  Instead I believe a pattern of treating your Platform as an Application (PaaA) is where the real value lies.

In this post I’m going to share the evolution of my understanding and hopefully leave you as fired up about PaaA (and PaaS) as I am.

The first PaaS that caught my attention was Google App Engine.  My understanding was that it was basically:

  • a place online where Google will host your applications for you
  • something that only worked when you write “special” compatible applications
  • not something that would change my life (i.e. my day job delivering large-scale Enterprise IT systems).

The next thing that caught my attention was Heroku which to me was basically:

  • a place that supported more “normal” applications like Ruby on Rails
  • a realization that if this trend of supporting more and more application types (middleware solutions) continues, using a PaaS could be something that I’d end up doing.

At this point I realized that I’d actually already used a PaaS when I wrote my very first static HTML website back in 2000.  The hosting provider was providing me with a very simple PaaS.  It only served static content, but nonetheless it was a PaaS and had already proved to me that the service model works.

So my understanding of a PaaS was that it was a service supplied by someone else to provide the platform to deliver your applications.  And I was starting to imagine that the trajectory I’d seen going from a PaaS that supported static content to one supporting full-blown Rails applications meant that soon they’d be applicable to the types of IT system I worked on.

Late last year, I had the privilege of my day job putting me in close proximity with a PaaS called CloudFoundry.  My understanding evolved again:

  • this time I was responsible for installing PaaS software myself (and hosting it in the cloud)
  • this time it was fully extensible and I had the responsibility of extending it to meet the middleware requirements (RabbitMQ, Cassandra etc.)
  • I was now expected to be a PaaS provider

Building and supporting environments for test and production was nothing new to me, but this was the first time I was doing it using a PaaS.  Yet I wasn’t receiving the platform as a service from someone externally, I was delivering it using a software application (Cloud Foundry) still referred to as a PaaS.  I’d somehow jumped from thinking one day I’d receive the benefits of a PaaS service from someone else to realizing now I was having to provide one… I felt a bit cheated!

So the obvious question in my mind was: will running a PaaS make my life easier and of course improve how well I could provide environments for test and production?

The answer wasn’t immediate. Getting up and running with Cloud Foundry was definitely a steeper learning curve than using something like CloudFormation from Amazon. Suddenly there was another application to deal with, and this one wasn’t even created by the development team. It was open source, complex, and quite opinionated about how to do things.

The developers weren’t in love either. They had more to learn and more rules to follow – some rules that even the Ops team couldn’t explain…

However over time (weeks not months or years) we stabilised the platform and started to enjoy a few pretty great things:

  • we could easily rebuild our entire data centre in about 1 hour including everything from networks up to all test environments including a working application and data
  • adding new applications was extremely easy – efficiently cloning our continuous delivery pipeline in Jenkins was our new challenge (which we solved)!
  • predictability across test environments and production was higher than I’d EVER seen (and I’ve spent years solving that)
  • developers had a very clean relationship with the platform and found it a very productive eco-system to work in

In short, I was very happy and now a big fan of PaaS. But it still took another month before I really felt like I understood why.

The answer is not the fact that Cloud Foundry is some kind of magic application, the answer is that it IS an application.

To understand why PaaS is so important, I now actually think of it as Platform-as-an-Application (PaaA?!). The true value does not lie in the fact that someone else could deliver it to you as a service. The true value is treating everything that your application relies on as a configuration-manageable, versionable, testable, releasable software application. Naturally this is complicated and consists of multiple sub-components all subject to the same rigour, but managed as one application.

Whether you achieve this by reusing a pre-written PaaS application like CloudFoundry, Stratos or OpenShift, or whether you write your own is up to you (and I suggest subject to the normal buy/build considerations).  Whether you host it for yourself (on cloud infrastructure or not) or whether you use a public PaaS (e.g. Pivotal’s public Cloud Foundry) is not the point.  The thing that PaaS teaches us is to treat your platform as an application.

It’s a lovely idea from a DevOps maturity perspective. We’ve gone from Ops having manual, silo-ed processes all the way to the logical extreme: the platform is treated no differently from the application. We are all doing the same things!

It’s lovely from a Continuous Delivery perspective because automated build and deployment of code is a native part of your platform application.  No extra work to do!

Another perhaps understated term is infrastructure as code (if you define infrastructure as used in IaaS). It is essential to implementing your platform as an application, but the name could leave you thinking you should just use it to write code to manipulate your servers, storage and network. Yes you should, that’s great! But not treating this code as a coherent part of one logical application that is capable of (and optimised for) hosting your application is missing out.

So where next?  It’s not hard to think of ways of making platform applications that are more powerful and more operable. Already some PaaS applications e.g. Stratos are thinking hard how to realise similar benefits earlier in the lifecycle i.e. via App Factory.

I’m sure there will be an explosion of both PaaS applications and PaaS service providers, all offering richer functionality, broader compatibility and higher service levels. Naturally Docker will significantly help with the compatibility of applications across PaaS offerings. For me, I plan to apply the Platform-as-an-Application pattern as widely as I can and of course try out as many pre-existing PaaS applications as I can (starting with Stratos and App Factory).