How Psychologically Safe does your team feel?

As per this article, Google conducted a two-year-long study into what makes their best teams great and found psychological safety to be the most important factor.

As per Wikipedia, psychological safety can be defined as:

“feeling able to show and employ one’s self without fear of negative consequences of self-image, status or career”

It certainly seems logical to me that creating a safe working environment, where people are free to share their individual opinions and avoid groupthink, is highly important.

So the key question is: how can you foster psychological safety?

Some of the best advice I’ve read comes from this blog by Steven M Smith. He suggests performing a paper-based voting exercise to measure safety.

Whilst we’ve found this to be a good technique, the act of screwing up bits of paper is tedious and hard to do remotely. Hence we’ve created an online tool:

https://safetychecker.herokuapp.com/

Please give it a go and share your experiences!


How my team do root cause analysis

This blog is more or less a copy and paste of a wiki page that my team at work use as part of our Problem Management process. It is heavily inspired by lots of good writing about blameless postmortems, for example from Etsy and the Beyond Blame book. I hope you find it useful.

RCA Approach

 

This page describes a 7 step approach to performing RCAs.  The process belongs to all of us, so please feel free to update it.

Traditionally RCA stands for Root Cause Analysis.  However, there are two problems with this:

  1. It implies there is one root cause. In practice there is often a cocktail of contributing causes, as well as negative (and sometimes positive) outcomes.
  2. The name implies that we are on a hunt for a cause. We are on a hunt for causes, but only to help us identify preventative actions, not just to solve a mystery or, worse, to find an offender to punish.

Therefore RCA is proposed to stand for Recurrence Countermeasure Analysis.

Step 1: Establish “the motive”

Ask the following:

Question: Does anyone think anyone in our team did something deliberately malicious to cause this? i.e. did they consciously carry out actions that they knew would cause this (or something with similarly negative consequences), or did they clearly understand the risks but care so little that they weren’t deterred?

and

Question: Does anyone think anyone outside our team… (as above).

The assumption here is that the answer is “NO” to both questions. If it is “NO”, we can now proceed in a blameless manner, i.e. never stopping our analysis at a point where a person should (or could) have done something different.

If either answer is “YES”, this is beyond the scope of this approach.

Step 2: Restate our meaning of “Blameless”

Read aloud the following to everyone participating in the RCA:

“We have established that we don’t blame any individual, either internal or external to our organisation, for the incident that has triggered this exercise. Our process has failed us and needs our collective input to improve it. If at any point during the process anyone starts to doubt this statement, or acts like they no longer believe it, we must return to Step 1. Everyone is responsible for enforcing this.

What is at stake here is not just getting to the bottom of this incident; it’s getting to the bottom of this incident and every future occurrence of the same incident. If anyone feels mistreated by this process, human nature means they will act in the future to disguise what they did in order to limit blame, and this will damage our ability to continuously improve.”

Step 3: Restate the rules

During this process we will follow these rules:

  1. Facts must not be subjective. If an assertion of fact cannot be 100% validated, we should agree and capture our confidence level in it (e.g. High, Medium, Low). We must also capture the actions that we could take to validate it (see the sketch after this list).
  2. If we don’t have enough facts, we will prioritise the facts that we need to go away and validate before reconvening to continue. Before suspending the process, agree a full list of “things we wish we knew but don’t know”, capture the actions that we could take to validate them and prioritise the discovery.
  3. If anyone feels uncomfortable during the process due to:
    1. Blame
    2. Concerns with the process
    3. Language or tones of voice
    4. Their ability to have their voice heard
  they must raise it immediately.
  4. We are looking for causes only to inform what we can do to prevent recurrence, not to apportion blame.
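
To make rules 1 and 2 a little more concrete, here is a minimal sketch in Python, purely for illustration (the Fact structure and the example facts are hypothetical, not taken from any real incident or tooling), of how facts, confidence levels and validation actions might be captured:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Fact:
        statement: str                 # the assertion of fact being made
        confidence: str                # "High", "Medium" or "Low" (rule 1)
        validation_actions: List[str] = field(default_factory=list)  # how we could validate it

    facts = [
        Fact("The deployment script changed on the day of the incident", "High"),
        Fact("The load balancer dropped connections during the incident", "Low",
             ["Pull the load balancer logs for the incident window"]),
    ]

    # Rule 2: before suspending the process, list what we still need to go away and validate.
    for fact in facts:
        if fact.confidence != "High":
            print(fact.statement, "->", fact.validation_actions)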

Step 4: Agree a statement to describe the incident that warranted this RCA

Using an open discussion, attempt to reach consensus on a statement that describes the incident that warranted this RCA. This must identify the thing (or things) that we don’t want to happen again (including all negative side-effects). Don’t forget the impact on people, e.g. having to work late to fix something, and don’t forget to capture the problem from all perspectives.

Write this down somewhere everyone can see.

Step 5: Mark up the problem statement

Look at the problem statement and underline every aspect of it that someone could ask “Why” about. Try to take an outsider’s view: even if you know the answer or think something cannot be challenged, it is still in scope for underlining.

Step 6: Perform the analysis

Document the “Why” question related to each underlined aspect in the problem statement.

For each “Why” question, attempt to agree on one direct answer. If you find you have more than one direct answer, split your “Why” question into more specific “Why” questions so that each answer correlates directly with a single question.

Mark up the answers as you did in Step 5.

Repeat this step until you’ve built up a tree with at least 5 answers per branch and at least 3 branches. If you can’t find at least 3 branches, you need to ask more fundamental “Why” questions about your problem statement and answers. If you can’t ask and answer more than 5 “Why”s per branch, you are possibly taking steps that are too large.

Do not stop this process with any branch ending on a statement that could be classified “human error”.  (Refer to what we agreed at step 1).

Do not stop this process at something that could be described as a “third party error”. Whilst the actions of third parties may not be directly under our control, we have to maintain a sense of accountability for the problem statement: if necessary, we should have implemented measures to protect ourselves from the third party.
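
As a purely illustrative sketch (the class and function names below are hypothetical, not part of the process itself), the “Why” tree and the rough size guidance above could be represented in Python along these lines:

    # A node in the analysis tree: one "Why" question about an underlined aspect
    # and the single direct answer the group agreed on.
    class WhyNode:
        def __init__(self, question, answer, children=None):
            self.question = question
            self.answer = answer
            self.children = children or []  # more specific "Why"s asked about this answer

    def branch_depths(node, depth=1):
        """Return the number of answers along every path below this node."""
        if not node.children:
            return [depth]
        depths = []
        for child in node.children:
            depths.extend(branch_depths(child, depth + 1))
        return depths

    def check_tree(branches):
        """Apply the rough Step 6 guidance: at least 3 branches, at least 5 answers per branch."""
        if len(branches) < 3:
            print("Ask more fundamental 'Why' questions about the problem statement.")
        for branch in branches:
            if min(branch_depths(branch)) < 5:
                print(f"Branch '{branch.question}' ends early - the steps may be too large.")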

Step 7: Form Countermeasure Hypotheses

Review the end points of your analysis tree and form hypotheses about actions that could be taken to prevent future recurrences. Like all good hypotheses, these should be specific and testable.

Use whatever mechanism you have for capturing and prioritising proposed work to track the identified actions and get them implemented. Use your normal approach to stating acceptance criteria, and don’t close the actions unless they pass the tests showing they have been effective.

 

Abstraction is not Obsoletion – Abstraction is Survival

Successfully delivering Enterprise IT is a complicated, probably even complex, problem. What’s surprising is that, as an industry, many of us are still comfortable accepting so much of the problem as our own to manage.

Let’s consider an admittedly very simplified and arguably imprecise view of the “full stack”:

  • Physical electrical characteristics of materials (e.g. copper / p-type silicon, …)
  • Electronic components (resistor, capacitor, transistor)
  • Integrated circuits
  • CPUs and storage
  • Hardware devices
  • Operating Systems
  • Assembly Language
  • Modern Software Languages
  • Middleware Software
  • Business Software Systems
  • Business Logic

When you examine this view, hopefully (irrespective of what you think about what’s included or missing and the order) it is clear that when we do “IT” we are already extremely comfortable being abstracted from detail. We are already fully ready to use things which we do not and may never understand. When we build an eCommerce platform, an ERP, or a CRM system, little thought is given to electronic components, for example.

My challenge to the industry as a whole is to recognise more openly the immense benefit of abstraction, on which we are already entirely dependent, and to embrace it even more urgently!

Here is my thinking:

  • Electrons are hard – we take them for granted
  • Integrated circuits are hard – so we take them for granted
  • Hardware devices (servers for example) are hard – so why are so many enterprises still buying and managing them?
  • The software that it takes to make servers useful for hosting an application is hard – so why are we still doing this by default?

For solutions that still involve writing code, the most extreme example of abstraction I’ve experienced so far is the Lambda service from AWS. Some seem to have started calling such things “serverless” computing.

With Lambda you write your software functions and upload them ready for AWS to run for you. Then you configure the triggering event that will cause your function to run. Then you sit back and pay for the privilege whilst enjoying the benefits. Obviously, if the benefits outweigh the cost of the service, you are making money. (Or perhaps, in the world of venture capital, if the benefits are generating lots of revenue or even just active user growth, for now you don’t care…)
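
To give a feel for how little of the stack you touch, here is a minimal sketch of a Lambda function written in Python; the event contents are made up for illustration, but this is roughly the shape of the handler AWS invokes when the configured trigger fires:

    def lambda_handler(event, context):
        # AWS invokes this function whenever the configured trigger fires
        # (an HTTP request, a queue message, a file upload, ...).
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": f"Hello, {name}",
        }

You upload the function, wire up a trigger, and everything beneath that line in the stack is AWS’s problem.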

Let’s take a mobile example. Anyone with enough time and dedication can sit at home on a laptop and start writing mobile applications. If they write it as a purely standalone, offline application, and charge a small fee for it, theoretically they can make enough money to retire on without even knowing how to spell server. But in practice most applications (even if they just rely on in-app adverts) require network-enabled services. Even then our app developer still doesn’t need to spell server; they just need to use the API of the online ad company, e.g. AdWords, and their app will start generating advertising revenue. Next, perhaps the application relies on persisting data off the device or on notifications being pushed to it. The developer still only needs to use another API to do this; for example, Parse can provide all of that as a programming service. You just use the software development kit and are completely abstracted from servers.

So why are so many enterprises still exposing themselves to so much of the “full stack” above?  I wonder how much inertia there was to integrated circuits in the 1950s and how many people argued against abstraction from transistors…

To survive is to embrace Abstraction!

 


[1] Abstraction in a general computer science sense, not a mathematical one (as used by Joel Spolsky in his excellent Law of Leaky Abstractions blog).

Reducing Continuous Delivery Impedance – Part 2: Solution Complexity

This is my second post in a series I’m writing about impedance to doing continuous delivery and how to overcome it.  Part 1 about Infrastructure challenges can be found here.  I also promised to write about complexity in continuous delivery in this earlier post about delivery pipelines.

I’m defining “a solution” as the software application or multiple applications under current development that need to work together (and hence be tested in an integrated manner) before release to production.

In my experience, continuous delivery pipelines work extremely well when you have a simple solution with the following convenient characteristics:

  1. All code and configuration is stored in one version control repository (e.g. Git)
  2. The full solution can be deployed all the way to production without needing to test it in conjunction with other applications / components under development
  3. You are using a 3rd party PaaS (treated as a black box, like Heroku, Google App Engine, or AWS Elastic Beanstalk)
  4. The build is quick to run, i.e. less than 5 minutes
  5. The automated tests are quick to run, i.e. minutes
  6. The automated test coverage is sufficient that the risks associated with releasing the software can be understood to be lower than the benefits of releasing.

The first 3 characteristics are what I am calling “Solution Complexity” and what I want to discuss in this post.

Here is a nice simple depiction of an application ticking all the above boxes.

[Figure: perfect]

Developers can make changes in one place, know that their change will be fully tested and know that when deployed into the production platform, their application should behave exactly as expected.  (I’ve squashed the continuous delivery (CD) pipeline into just one box, but inside it I’d expect to see a succession of code deployments, and automated quality gates like this.)

 

But what about when our solution is more complex?

What about if we fail to meet the first characteristic and our code is in multiple places and possibly not all in version control? This is definitely a common problem I’ve seen, in particular for configuration and data-loading scripts. However, it isn’t particularly difficult to solve from a technical perspective (more on the people side in a future post!). Get everything managed by a version control tool like Git.

Depending on the SCM tool you use, you shouldn’t necessarily feel obliged to use one repository. If you do use multiple repositories, most continuous integration tools (e.g. Jenkins) can be set up to handle builds that consume from several of them. If you are using Git, you can even handle this complexity within your version control repository, e.g. by using submodules.

 

What about if your solution includes multiple applications like the following?

[Figure: complex]

Suddenly our beautiful pipeline metaphor is broken and we have a network of pipelines that need to converge (analogous to fan-in in electronics). This is far from a rarity; I would say it is overwhelmingly the norm. It certainly makes things more difficult, and we now have to consider carefully how our plumbing is going to work. We need to build what I call an “integrated pipeline”.

Designing an integrated pipeline is all about determining the “points of integration” (POIs), i.e. the first time that testing involves the combination of two or more components. At that point, you need to record the version of each component so that they are kept consistent for the rest of the pipeline. If you fail to do this, the earlier quality gates in the pipeline are invalidated.

In the example below, Applications A and B have their own CD pipelines where they are deployed to independent test environments and face a succession of independent quality gates. Whenever a version of Application A or B gets to the end of its respective pipeline, instead of going into production it moves into the integrated pipeline and creates a new integrated (or composite) build number. After this POI the applications progress towards production in the same pipeline and can only move in sync. In the diagram, version A4 of Application A and version B7 of Application B have made it into integration build I8. If integration build I8 makes it through the pipeline, it is worthy of progressing to production.

[Figure: int]

Depending on the tool you use for orchestration, there are different solutions for achieving the above. Fundamentally it doesn’t have to be particularly complicated. You are simply aggregating version numbers, which can easily be stored together in a text document in any format you like (YAML, POM, JSON, etc.).
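
To show just how simple this aggregation can be, here is a sketch in Python (the component names, versions and file layout are hypothetical, not from any particular tool) that records an integrated build manifest at the POI:

    import json

    def record_integration_build(build_number, component_versions):
        """Write the component versions that make up an integrated build to a manifest file."""
        manifest = {
            "integration_build": build_number,
            "components": component_versions,
        }
        with open(f"integration-build-{build_number}.json", "w") as f:
            json.dump(manifest, f, indent=2)
        return manifest

    # e.g. version A4 of Application A and version B7 of Application B entering
    # integration build I8, as in the example above:
    record_integration_build("I8", {"application-a": "A4", "application-b": "B7"})

The rest of the integrated pipeline then deploys only what the manifest names, so the versions that were tested together are the versions that reach production together.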

Some people reading this may by now be boiling up inside, ready to scream “MICRO SERVICES” at their screens. Micro services are by design independently deployable services. The independence is achieved by ensuring that they fulfill, and expect to consume, strict contract APIs, so that integration with other services can be managed and components can be upgraded independently. A convention like SemVer can be adopted to manage changes to contract compatibility. I’ve for a while had this tagged in my head as the eBay or Amazon way of doing things, but micro services are now gaining a lot of attention. If you are implementing micro services and achieving this independence between pipelines, that’s great. Personally, on the one micro services solution I’ve worked on so far, we still opted for an integrated pipeline that operated on an integrated build and produced predictable upgrades to production (we are looking to relax that at some point in the future).

Depending on how you are implementing your automated deployment, you may have deployment automation scripts that live separately from your application code. Obviously we want to use consistent versions of these throughout deployments to different environments in the pipeline. Therefore I strongly advise managing these scripts as a component in the same manner.

What about if you are not using a PaaS? In my experience, this represents the vast majority of solutions I’ve worked on. If you are not deploying into a fully managed container, you have to care about the version of the environment that you are deploying into. The great thing about treating infrastructure as code (assuming you overcome the associated impedance) is that you can treat it like an application, give it a pipeline and feed it into the integrated pipeline (probably at a very early POI). Effectively you are creating your own platform and performing continuous delivery on that. Obviously, the further your production environment is from being a versionable component like this, the greater the manual effort to keep environments in sync.

[Figure: paas]

 

Coming soon: more sources of impedance to doing continuous delivery: Software packages, Organisation size, Organisation structure, etc.

 

(Thanks to Tom Kuhlmann for the graphic symbols.)

 

DevOps – have fun and don’t miss the point!

DevOps is a term coined in 2009 which has since evolved into a movement of people passionate about a set of practices that enable companies to successfully manage rapidly changing IT services by bridging the gap between Development and Operations teams.

In my experience DevOps is very rewarding, so in this blog I’m going to try to bring its practical side to life.   I hope my post may help a few people connect DevOps to things they are already doing and even better, form some ideas about how to go further.

Brief Definition

The basic premise of DevOps is that most organisations with IT functions have separate teams responsible for software Development and for IT Operations (aka Service Management). The observation is that this separation of duties can have the negative side effects of inefficiency, unresponsiveness and unhappiness.

If your company is meeting its business objectives and you don’t have any sense for this pain, you’ve probably read enough!  Put the blog down, get yourself an ‘I’m rocking the DevOps’ t-shirt and leave me some comments containing your secrets so we can all learn from you!

A lot has been written that analyses the misery of Development versus Operations, so I’ll keep it brief.

The problems stem from the natural tension between Developers (who deliver changes) and Operators (who are incentivised to maximise service availability and hence to prevent changes, along with other such obvious threats). Add to the mix some wholesome but local optimisation (performed independently by each team) and you may start amassing an ‘annual budget’ worth of completed yet unreleased features alongside a stable but stagnating live service. Periodically the Business teams mandate a ‘go live’, blowing up the dam and drowning live service in changes, some of which have actually gone quite toxic…

In the ideal world developers should be assured of a smooth, almost instantaneous transition of their code into production.  Operations staff should be assured of receiving stable and operable applications, and of having sufficient involvement in all changes that will impact service (in particular altered hosting requirements.)

I think of DevOps as a campaign which we can all join to maximise the throughput of successful IT changes where we work.  I’m personally ready to trust business teams to ensure we are choosing to deliver the right changes!

Where to Find the Fun

People and processes…, blah-dee-blah…  Ok, fair enough, DevOps does revolve around people, but we all like technology, so let’s start with the fun parts and come back to people and processes later!

Typical DevOps Technical Concerns include:

  • Configuration Management
  • Application Lifecycle Management
  • Environment Management
  • Software and Infrastructure Automation.

Where I work we have a well-defined technology architecture methodology which neatly classifies these concerns as Development Architecture (aka DevArch).

Q. Does taking the above concerns very seriously mean that you are doing DevOps?
It helps; it’s a good start. But it’s also key that these concerns are consistently understood and are important priorities for both Development and Operations.

Caring about these concerns alone isn’t enough; time to tool them up!

The great news is that the popularity of DevOps has left the tooling space awash with fantastic innovation. There are too many tools to mention, but current favourites of mine include the orchestration tool Jenkins, the version control tool Git, the code quality framework and dashboard Sonar, the automated configuration management tool Chef, and the VM manager Vagrant.

We are also now at the point where freely available open source tools (including all the above) exceed the capability of many commercial alternatives (hasta la vista ClearCase!).

Q. Does using some or all of the above types of tool mean you are doing DevOps? 
Not necessarily, but they help a lot.  Especially if they are equally prominent to and shared by Development and Operations.

Perhaps we need to add automation – lovely stuff like automated builds, automated deployments, automated testing, automated infrastructure.

In more good news, the understanding and appreciation of software delivery automation has gone up massively in the last few years.  Gradually this has reduced fear (and denial!) and increased management support and budgets, and also raised expectations!  I must thank misters Humble and Farley in particular for cementing a usable meaning of continuous delivery and creating the excellent delivery/deployment pipeline pattern for orchestrating automation.

Q. Does doing continuous integration and/or having a delivery pipeline mean you are doing DevOps?
It helps a lot. But assuming (like every pipeline I’ve worked with so far) your pipeline doesn’t implement zero-touch changes from check-in to production, Operations will still be prominent in software releases. And of course their involvement continues long after deployments and throughout live service. One development team’s automated deployment orchestrator might be one Operations team’s idea of malware! Automation alone will certainly not eliminate the tension caused by the opposing relationships to change.

The Point (Don’t Miss It)

Ok, technology over; let’s talk directly about people and processes. The key point is that to globally optimise your organisation’s ability to deliver successful software changes, you have to think across both Development and Operations teams. It’s even nearly in the name: DevOps.

Development teams optimising software delivery processes in a silo (for example with Agile) will not help Operations teams accommodate shorter release cycles. Operations teams optimising risk assessment processes (for example with ITIL) around “black box” software releases in which they have had little involvement will draw the only logical conclusion: that changes should be further inhibited.

Operations teams optimising systems administration with server virtualisation and automated configuration management (e.g. Puppet) will not help development and testing productivity if they are not understood and adopted in earlier lifecycle stages by Development teams.

Development teams optimising code deployment processes with shiny automated deployment tools achieve diminished returns and in fact increased risk if the processes are not approved for use in Production.

There is no substitute for uniting over the common goal of improving release processes. Tools and automation are a lot of fun, but doing these things in silos will not achieve what DevOps really promises. Collaboration is paramount: don’t miss the point!