Abstraction is not Obsoletion – Abstraction is Survival

Successfully delivering Enterprise IT is a complicated, probably even complex problem.  What’s surprising is that, as an industry, many of us are still comfortable accepting so much of the problem as our own to manage.

Let’s consider an admittedly very simplified and arguably imprecise view of the “full stack”:

  • Physical electrical characteristics of materials (e.g. copper / p-type silicon, …)
  • Electronic components (resistor, capacitor, transistor)
  • Integrated circuits
  • CPUs and storage
  • Hardware devices
  • Operating Systems
  • Assembly Language
  • Modern Software Languages
  • Middleware Software
  • Business Software Systems
  • Business Logic

When you examine this view, hopefully (irrespective of what you think about what’s included or missing and the order) it is clear that when we do “IT” we are already extremely comfortable being abstracted from detail. We are already fully ready to use things which we do not and may never understand. When we build an eCommerce platform, an ERP, or a CRM system, little thought is given to electronic components, for example.

My challenge to the industry as a whole is to recognise more openly the immense benefit of the abstraction on which we already entirely depend, and to embrace it even more urgently!

Here is my thinking:

  • Electrons are hard – we take them for granted
  • Integrated circuits are hard – so we take them for granted
  • Hardware devices (servers for example) are hard – so why are so many enterprises still buying and managing them?
  • The software that it takes to make servers useful for hosting an application is hard – so why are we still doing this by default?

For solutions that still involve writing code, the most extreme example of abstraction I’ve experienced so far is the Lambda service from AWS.  Some seem to have started calling such things serverless computing.

With Lambda you write your software functions and upload them ready for AWS to run for you. Then you configure the triggering event that would cause your function to run. Then you sit back and pay for the privilege whilst enjoying the benefits. Obviously if the benefits outweigh the cost of the service, you are making money. (Or perhaps in the world of venture capital, if the benefits are generating lots of revenue or even just active user growth, for now you don’t care…)
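To make that concrete, here is a rough sketch of the workflow using the AWS command line (the function name, role ARN and queue below are made-up placeholders, and the exact flags can vary between CLI versions):

# Package a single-file Python handler ready for AWS to run on demand
$ zip function.zip lambda_function.py

# Create the function from the package (names, runtime and role are illustrative only)
$ aws lambda create-function \
    --function-name my-function \
    --runtime python3.9 \
    --handler lambda_function.handler \
    --role arn:aws:iam::123456789012:role/my-lambda-role \
    --zip-file fileb://function.zip

# Configure a triggering event source, e.g. a queue; AWS runs the function for you from here
$ aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:sqs:eu-west-1:123456789012:my-queue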

Let’s take a mobile example. Anyone with enough time and dedication can sit at home on a laptop and start writing mobile applications. If they write it as a purely standalone, offline application, and charge a small fee for it, theoretically they can make enough money to retire on without even knowing how to spell server.  But in practice most applications (even if they just rely on in-app adverts) require network-enabled services. Even then our app developer still doesn’t need to spell server: they just need to use the API of an online ad company (e.g. AdWords) and their app will start generating advertising revenue. Next perhaps the application relies on persisting data off the device, or on notifications being pushed to it. The developer still only needs to use another API to do this; for example Parse can provide all of that as a programming service.  You just use the software development kit and are completely abstracted from servers.
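The Parse SDK wraps calls of roughly this shape to its hosted REST API; persisting a record off the device is nothing more than an HTTP request (the application ID, key and class name below are placeholders):

$ curl -X POST \
    -H "X-Parse-Application-Id: <your-app-id>" \
    -H "X-Parse-REST-API-Key: <your-rest-api-key>" \
    -H "Content-Type: application/json" \
    -d '{"score":1337,"playerName":"Sean Plott"}' \
    https://api.parse.com/1/classes/GameScore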

So why are so many enterprises still exposing themselves to so much of the “full stack” above?  I wonder how much inertia there was to integrated circuits in the 1950s and how many people argued against abstraction from transistors…

To survive is to embrace Abstraction!


[1] Abstraction in a general computer science sense, not a mathematical one (as used by Joel Spolsky in his excellent Law of Leaky Abstractions blog post).

Join the DevOps Community Today!

As I’ve said in the past, if your organisation does not yet consider itself to be “doing DevOps” you should change that today.

If I was pushed to say the one thing I love most about the DevOps movement, it would be the sense of community and sharing.

I’ve never experienced anything like it previously in our industry.  It seems like everyone involved is united by being passionate about collaborating in as many ways as possible to improve:

  • the world through software
  • the rate at which we can do that
  • the lives of those working in our industry.

The barrier to entry to this community is extremely low.

You could also consider attending the DevOps Enterprise Summit London (DOES).  It’s the third DOES event, the first ever in Europe, and highly likely to be one of the most valuable professional development things you do this year.  Organised by Gene Kim (co-author of The Phoenix Project) and IT Revolution, the conference is highly focused on bringing together anyone interested in DevOps and providing them with as much support as humanly possible over two days.  This involves presentations from some of the most advanced IT organisations in the world (aka unicorns), as well as many from traditional enterprises that may be on a very similar journey to you.  Already confirmed are talks from:

  • Rosalind Radcliffe talking about doing DevOps with Mainframe systems
  • Ron Van Kemenade, CIO of ING Bank
  • Jason Cox talking about DevOps transformation at Disney
  • Scott Potter, Head of New Engineering at News UK
  • And many more.

My recommendation is to get as many of your organisation along to the event as possible.  They won’t be disappointed.

Early bird tickets are available until 11th May 2016.

(Full disclosure – I’m a volunteer on the DOES London committee.)


Reducing Continuous Delivery Impedance – Part 5: Learned Helplessness

Nearly two years ago, I started this blog series to describe the main challenges I’d experienced trying to implement Continuous Delivery.  At the time, the last post in the series was about four challenges related to people.  Since then I’ve observed a fifth challenge and discovered it has been studied in psychology and has a name.

In this post I’ll attempt to describe how to recognise and tackle Learned Helplessness.  Please share your comments (especially if my Psychology-by-Wikipedia needs guidance).

Through various interactions with clients, at meetups, conferences and even with my own team, I’ve witnessed the following phenomena:

  • Something is done (or not done) on an engagement that makes Continuous Delivery difficult (for example the development team accepting that SonarQube says some seriously defamatory things about their unit test coverage, but neglecting even to gradually address it).
  • When questioned:
    • many people already appreciate that this is very wrong.
    • hardly anyone can really explain or justify why this is happening.
    • hardly anyone seems worked up about a solution.

It gave me the impression that people had experienced good practice in the past, but having joined this particular engagement had somehow lost the inclination to do it.  It’s possible that for some people, in the past when things just worked, they didn’t question it, so never really appreciated the value of particular practices.  But I think most people are more analytical than that.  I started to realise that people had probably gone through an experience like this:

  • Joined the engagement, didn’t understand why certain things were / weren’t done, but opted to observe before speaking up.
  • Realised things actually weren’t magically working in some new logic- / experience- defying way.
  • Spoke up but didn’t really get listened to.
  • Spoke up again several times, but didn’t really ever get listened to.
  • Gave up and accepted things for the sorry way that they are.

I figured there must be a name for this, started googling, and realised it is called Learned Helplessness, something first demonstrated in experiments in the 1960s by some scientists we can probably assume weren’t dog lovers…

The experiments are best described here on Wikipedia but in extremely simplified form:

  1. some dogs were given no electric shocks,
  2. some dogs were given shocks and also given a button to press to disable the shocks,
  3. some dogs received shocks at the same time as group 2 dogs but had no button.  Group 3 dogs were paired with Group 2 dogs and were shocked until their Group 2 pair happened to press the button (which was at a random time from the Group 3 dog’s perspective).

The learned helplessness of Group 3 was demonstrated in the second part of the experiments, when dogs had the opportunity to cross over a small wall to avoid getting shocks.  Whereas groups 1 and 2 quickly learned how to avoid shocks, the group 3 dogs failed to learn and sat there accepting their fate in pain.


The similarity of the diagrams illustrating these experiments to some well-known diagrams about DevOps made me smile!

Subsequent experiments demonstrated the ineffectiveness of threats or even rewards in motivating group 3 to change their location.  Only by physically moving the group 3 dogs, more than twice, did the experimenters teach them to overcome the helplessness.  Later experiments also demonstrated the same phenomenon in humans (without electricity).

So how do we overcome this?

Here are some things I’m experimenting with:

  • Try some introspection – ask yourself what you’ve learnt to accept; really look around for things that are stopping your project going faster, no matter how obvious, and start to ask why, perhaps at least 5 times.
  • Ask others around you (ideally with less, the same, and more experience than you) what they think is preventing learning and improvement, and consider asking the “5 Whys” with them.
  • Pay close attention to new joiners to your team – they are the only ones not yet infected by Learned Helplessness.
  • Be sensitive with people.  No-one wants to be told they are “helpless” or hear your amateur psychobabble.  Tread carefully.
  • If you are looking to introduce a change, don’t overestimate the impact of threatening or incentivising the people who need to change – they may already be too apathetic.  Instead expect to need to show them multiple times:
    • That the proposed change is possible.  You need to demonstrate it to them (for example if it relates to Continuous Delivery something like the DevOps Platform may help make things real).
    • That their opinions count and they have an important voice.

How is Learned Helplessness harming your organisation and to what extent are you suffering?


Running the DevOps Platform on Microsoft Azure

As per my last post about GCE, sometimes knowing something is possible just isn’t good enough.  So here is how I spun up the DevOps Platform on the Microsoft Azure cloud.  Warning: thanks to Docker Machine, this post is very similar to that earlier one.

1. I needed an Azure account.

2. I logged into my Azure account and didn’t click “view the new Portal”.

3. On the left hand menu, I scrolled down to the bottom (it didn’t immediately look to me like it would scroll, so hover over it) and clicked Settings.  Here I was able to see my subscription ID and copy it.
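(These days you can also grab the subscription ID from the command line, assuming you have the newer Azure CLI (az) installed:)

$ az account show --query id --output tsv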

4. (Having previously installed Docker Toolbox, see here) I opened Git Bash (as an Administrator) and ran this command:

$ docker-machine create --driver azure --azure-size Standard_A3 --azure-subscription-id <the ID I just copied> markos01

I was prompted to open a URL in my browser, enter a confirmation code, and then log in with my Azure credentials.  Credit to Microsoft, this was easier than GCE, for which I needed to install the gcloud command-line utility!

You will notice that this is fairly standard.  I picked a Standard_A3 machine type which is roughly equivalent to what we use for AWS and GCP.

5. I waited while a machine containing Docker was created in Azure.
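While waiting, you can ask Docker Machine what it knows about (output trimmed; the URL will show your machine’s public IP):

$ docker-machine ls
NAME       ACTIVE   DRIVER   STATE     URL
markos01   -        azure    Running   tcp://<public-ip>:2376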

6. I cloned the ADOP Docker Compose repository from GitHub:

$ git clone https://github.com/Accenture/adop-docker-compose
$ cd adop-docker-compose

7. I ran the normal startup.sh command as follows:

$ ./startup.sh -m markos01 -c NA

And entered a user name (thanks to this recent enhancement), and hey presto:

SUCCESS, your new ADOP instance is ready!
Run these commands in your shell:
eval "$(docker-machine env $MACHINE_NAME)"
source env.config.sh
Navigate to in your browser to use your new DevOps Platform!

And just to prove it:

$ whois | grep Org
Organization: Microsoft Corporation (MSFT)
OrgName: Microsoft Corporation

8. I had to go to All resources > markos01-firewall > Inbound security rules and add a rule to allow HTTP to my server on port 80.
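If you prefer to script it, a rough equivalent with the newer Azure CLI (the resource group name is a placeholder) is something like:

$ az network nsg rule create \
    --resource-group <my-resource-group> \
    --nsg-name markos01-firewall \
    --name allow-http \
    --priority 100 \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 80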

9. I viewed my new ADOP instance, hosted on Azure, in (of course…) Chrome! 😉

More lovely stuff!


Running the DevOps Platform on Google Compute Engine

Sometimes knowing something is possible just isn’t good enough.  So here is how I spun up the DevOps Platform on Google Compute Engine (GCE).

1. I needed a Google Compute Engine account.

2. I enabled the Google Compute APIs for my GCE account

3. I installed the Google Cloud command-line SDK (gcloud)

4. I opened the Google Cloud SDK Shell link that had appeared in my Windows Start menu and ran:

C:\> gcloud auth login

This popped open a Chrome window and asked me to authenticate against my GCE account.

5. (Having previously installed Docker Toolbox, see here) I opened Git Bash (as an Administrator) and ran this command:

$ docker-machine create --driver google \
                 --google-project <a project in my GCE account> \
                 --google-machine-type n1-standard-2 \
                 markosadop01

You will notice that this is fairly standard.  I picked an n1-standard-2 machine type which is roughly equivalent to what we use for AWS.

6. I waited while a machine containing Docker was created in Google.

7. I cloned the ADOP Docker Compose repository from GitHub:

$ git clone https://github.com/Accenture/adop-docker-compose
$ cd adop-docker-compose

8. I ran the normal startup.sh command as follows:

$ ./startup.sh -m markosadop01 -c NA

And hey presto:

SUCCESS, your new ADOP instance is ready!
Run these commands in your shell:
eval "$(docker-machine env $MACHINE_NAME)"
source env.config.sh
Navigate to in your browser to use your new DevOps Platform!

And just to prove it:

$ whois | grep Org
Registrant Organization: Google Inc.
Admin Organization: Google Inc.
Tech Organization: Google Inc.

9. I had to go to Networks > Firewall rules and add a rule to allow HTTP to my server.
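This can also be scripted with gcloud; something along these lines should do it (the rule name is arbitrary):

$ gcloud compute firewall-rules create allow-http \
    --allow tcp:80 \
    --source-ranges 0.0.0.0/0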

10. I viewed my new ADOP instance, hosted on Google, in (of course…) Chrome!

Lovely stuff!

Start Infrastructure Coding Today!

* Warning this post contains mildly anti-Windows sentiments *

It has never been easier to get ‘hands-on’ with Infrastructure Coding and Containers (yes including Docker), even if your daily life is spent using a Windows work laptop.  My friend Kumar and I proved this the other Saturday night in just one hour in a bar in Chennai.  Here are the steps we performed on his laptop.  I encourage you to do the same (with an optional side order of Kingfisher Ultra).


  1. We installed Docker Toolbox.
    It turns out this is an extremely fruitful first step as it gives you:

    1. Git (and in particular GitBash). This allows you to use the world’s best Software Configuration Management tool, Git, and welcomes you into the world of being able to use and contribute to Open Source software on GitHub.  Plus it has the added bonus of turning your laptop into something which understands good wholesome Linux commands.
    2. Virtual Box. This is a hypervisor that turns your laptop from being one machine running one Operating System (Windoze) into something capable of running multiple virtual machines with almost any Operating System you want (even UniKernels!).  Suddenly you can run (and develop) local copies of servers that from a software perspective match Production.
    3. Docker Machine. This is a command line utility that will create virtual machines for running Docker on.  It can do this either locally on your shiny new Virtual Box instance or remotely in the cloud (even the Azure cloud – Linux machines of course)
    4. Docker command line. This is the main command line utility of Docker.  This will enable you to download and build Docker images, and turn them into running Docker containers.  The beauty of the Docker command line is that you can run it locally (ideally in GitBash) on your local machine and have it control Docker running on a Linux machine.  See diagram below.
    5. Docker Compose. This is a utility that gives you the ability to run and associate multiple Docker containers by reading what is required from a text file (a minimal example is sketched at the end of this post).
  2. Having completed step 1, we opened up the Docker Quickstart Terminal by clicking the entry that had appeared in the Windows start menu. This runs a shell script via GitBash that performs the following:
    1. Creates a virtual box machine (called ‘default’) and starts it
    2. Installs Docker on the new virtual machine
    3. Leaves you with a GitBash window open that has the necessary environment variables set to point the Docker command line utility at your new virtual machine.
  3. We wanted to test things out, so we ran:
    $ docker ps -a


    This showed us that our Docker command line tool was successfully talking to the Docker daemon (process) running on the ‘default’ virtual machine, and that there were no containers, either running or stopped, on it.

  4. We wanted to test things a little further, so we ran:
    $ docker run hello-world
    Hello from Docker.
    This message shows that your installation appears to be working correctly.
    To generate this message, Docker took the following steps:
    1. The Docker client contacted the Docker daemon.
    2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    3. The Docker daemon created a new container from that image which runs the
       executable that produces the output you are currently reading.
    4. The Docker daemon streamed that output to the Docker client, which sent it
       to your terminal.
    To try something more ambitious, you can run an Ubuntu container with:
    $ docker run -it ubuntu bash
    Share images, automate workflows, and more with a free Docker Hub account:
    For more examples and ideas, visit:


    The output is very self-explanatory.  So I recommend reading it now.

  5. We followed the instructions above to run a container from the Ubuntu image.  This started a container running Ubuntu for us, and we ran a command to satisfy ourselves that we were running Ubuntu.  Note one slight modification: we had to prefix the command with ‘winpty’ to work around a tty-related issue in GitBash.
    $ winpty docker run -it ubuntu bash
    root@2af72758e8a9:/# apt-get -v | head -1
    apt 1.0.1ubuntu2 for amd64 compiled on Aug  1 2015 19:20:48
    root@2af72758e8a9:/# exit
    $ exit


  6. We wanted to run something else, so we ran:
    $ docker run -d -P nginx:latest


  7. This caused the Docker command line to do more or less what is stated in the previous step with a few exceptions.
    • The -d flag caused the container to run in the background (we didn’t need -it).
    • The -P flag caused Docker to expose the ports of Nginx back to our Windows machine.
    • The image was nginx rather than Ubuntu.  We didn’t need to specify a command for the container to run after starting (leaving it to run its default command).
  8. We then ran the following to establish how to connect to our Nginx:
    $ docker-machine ip default
     $ docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                           NAMES
    826827727fbf        nginx:latest        "nginx -g 'daemon off"   14 minutes ago      Up 14 minutes>80/tcp,>443/tcp   ecstatic_einstein


  9. We opened a proper web browser (Chrome) and navigated to the address from the information above (your IP address may differ). Pleasingly we were presented with the ‘Welcome to nginx!’ default page.
  10. We decided to clean up some of what we’d created locally on the virtual machine, so we ran the following to:
    1. Stop the Nginx container
    2. Delete the stopped containers
    3. Demonstrate that we still had the Docker ‘images’ downloaded


$ docker kill `docker ps -q`

$ docker rm `docker ps -aq`

$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

$ docker images
REPOSITORY                     TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
nginx                          latest              sha256:99e9a        4 weeks ago         134.5 MB
ubuntu                         latest              sha256:3876b        5 weeks ago         187.9 MB
hello-world                    latest              sha256:690ed        4 months ago        960 B



  11. We went back to Chrome and hit refresh. As expected Nginx was gone.
  12. We opened Oracle VM VirtualBox from the Windows start menu so that we could observe our ‘default’ machine listed as running.
  13. We ran the following to stop our ‘default’ machine and observed it then stopping in VirtualBox:
    $ docker-machine stop default


  14. Finally we installed Vagrant. This is essentially a much more generic version of Docker Machine, capable of creating virtual machines in VirtualBox not just for Docker, but for many other purposes.  For example, from an Infrastructure Coding perspective, you might run a virtual machine for developing Chef code.  A short sketch of what driving Vagrant looks like follows below.
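For a minimal taste (not something we ran in the bar; the box name is just a well-known public example), the basic Vagrant flow is roughly:

# Generate a Vagrantfile pointing at a public base box
$ vagrant init ubuntu/trusty64

# Create and boot the virtual machine in VirtualBox, then open a shell on it
$ vagrant up --provider virtualbox
$ vagrant ssh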


Not bad for one hour on hotel wifi!

Kumar keenly agreed he would complete the following next steps.  I hope you’ll join him on the journey and Start Infrastructure Coding Today!

  1. Learn Git. It really only takes 10 minutes with this tutorial to learn the basics.
  2. Docker – continue the journey here
  3. Vagrant
  4. Chef
  5. Ansible
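
To close the loop on Docker Compose from step 1, here is a minimal, illustrative sketch (the file and images are my own choice, not something we ran that night): describe two associated containers in a text file, then bring them up and tear them down together.

# Describe two associated containers in a docker-compose.yml file
$ cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: redis:latest
EOF

# Start them together in the background, check on them, and then tear them down
$ docker-compose up -d
$ docker-compose ps
$ docker-compose down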


Please share any issues following this and I’ll improve the instructions.  Please share any other useful tutorials and I will add those also.

Neither Carrot Nor Stick

Often when we talk about motivating people, the idiom of having the choice of using a Carrot or a Stick is used. I believe this originates from the conventional wisdom about the best ways to get a mule (as in the four-legged horse-like animal) to move. You could try using a carrot, which might be enough of a treat for the mule to move in order to reach it. Or you could try a stick, which might be enough of a threat to get the mule to move in order to avoid being hit.

The idiom works because the carrot is analogous to offering someone an incentive (such as pay rises or bonuses) to get them to do something. The stick is analogous to offering them the threat of punishment (such as being fired or demoted). It’s curious how threat and treat differ by just one letter…

This all makes sense for a mule but not really for people.

The idiom has a major flaw because humans are significantly more complex than animals (all of us!).

Instead, if we want to influence someone effectively and sustainably, we need to think about how to help them have an emotional attachment to the thing we are looking to achieve.

I think this comes down to the following:

  • Being open to exploring both their and your personal motivations with a view to maximising the achievement of both – in particular the overlap.
  • Starting from an open mind and only looking to agree the desired outcome. This is not the same as agreeing the approach. The approach is key to the satisfaction and motivation of the implementer and key to their attachment to achieving a great solution.
  • Supporting them in their chosen approach taking care not to challenge unnecessarily or do anything that risks eroding their sense of your trust.
  • Being transparent about the consequences of not delivering the desired outcome and clarifying your own role in shielding them from blame and creating a safe environment to operate.

Of course these ideas are not my own.  I would encourage you to explore the great materials that I have taken inspiration from.

And I’d love to hear your own ideas and recommended reading.