Proposed Reference Architecture of a Platform Application (PaaA)

In this blog I’m going to propose a design for modelling a Platform Application as a series of generic layers.

I hope this model will be useful for anyone developing and operating a platform, in particular if they share my aspirations to treat the Platform as an Application and to:

Hold your Platform to the same engineering standards as you would (should?!) your other applications

This is my fourth blog in a series where I’ve been exploring treating our Platforms as an Application (PaaA). My idea is simple: whether you are re-using a third-party Platform Application (e.g. Cloud Foundry) or rolling your own, you should:

  • Make sure it is reproducible from version control
  • Make sure you test it before releasing changes
  • Make sure you release only known, tested, and reproducible versions
  • Industrialise and build a Continuous Delivery pipeline for your Platform Application
  • Industrialise and build a Continuous Delivery pipeline within your platform.
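The release discipline in the list above can be sketched as a simple gate: refuse to release any platform version that isn’t reproducible from version control and hasn’t passed its tests. This is purely illustrative; all the names here are hypothetical.

```python
class ReleaseError(Exception):
    pass

def release_platform(version, version_control, test_results):
    """Refuse to release a platform version unless it is reproducible
    from version control and has passed its tests."""
    if version not in version_control:
        raise ReleaseError(f"{version} is not reproducible from version control")
    if not test_results.get(version, False):
        raise ReleaseError(f"{version} has not passed its tests")
    return f"releasing platform {version}"
```

In practice the same check would live as a stage in the platform’s own delivery pipeline rather than in application code, but the principle is identical.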

As I’ve suggested, if we are treating Business Applications as Products, we should also treat our Platform Application as a Product.  With this approach in mind, clearly a Product Owner of a Business Application (e.g. a website) is not going to be particularly interested in the detail of how something like high-availability works.

A Platform Application should abstract the applications that use it from the many concerns which are important but not interesting to them.

You could have a Product Owner for the whole Platform Application, but that’s a lot to think about, so I believe this reference architecture is a useful way to divide and conquer.  To further simplify things, I’ve defined this anatomy in layers, each of which abstracts the next layer from the underlying implementation.

So here it is:



Starting from the bottom:

  • Hardware management 
    • Consists of: Hypervisor, logical storage managers, software-defined network
    • The owner of this layer can make the call: “I’ll use this hardware”
    • Abstracts you from: the hardware, allowing you to work logically with compute, storage and network resources
    • Meaning you can: consider hardware to be fully logical, within the limits of this layer (e.g. physical capacity or performance)
    • Presents to the next layer: the ability to work with logical infrastructure
  • Basic Infrastructure orchestration 
    • Consists of: Cloud console and API equivalent. See this layer as described in OpenStack here
    • The owner of this layer can make the call: “I will use these APIs to interact with the Hardware Management layer.”
    • Abstracts you from: having to manually track usage levels of compute and storage, and from monitoring the hardware
    • Meaning you can: perform operations on compute and storage in bulk using an API
    • Presents to the next layer: a convenient way to programmatically make bulk updates to what logical infrastructure has been provisioned
  • Platform Infrastructure orchestration (auto-scaling, resource usage optimisation)
    • Consists of: effectively a software application built to create and manage the required infrastructure resources. Holds the logic required for auto-scaling, auto-recovery and resource usage optimisation
    • The owner of this layer can make the call: “I need this many servers of that size, and this storage, and this network”
    • Abstracts you from: manually creating and scaling the required infrastructure, and from changing this over time in response to demand levels
    • Meaning you can: expect that enough logical infrastructure will always be available for use
    • Presents to the next layer: the required amount of logical infrastructure resources to meet the requirements of the platform
  • Execution architecture 
    • Consists of: operating systems, containers, and middleware e.g. Web Application Server, RDBMS
    • The owner of this layer can make the call: “This is how I will provide the runtime dependencies that the Business Application needs to operate”
    • Abstracts you from: the software and configuration required for your application to run
    • Meaning you can: know you have a resource that could receive release packages of code and run them
    • Presents to the next layer: the ability to create the software resources required to run the Business Applications
  • Logical environment separation
    • Consists of: logically separate and isolated instances of environments that can be used to host a whole application by providing the required infrastructure resources and runtime dependencies
    • The owner of this layer can make the call: “This is what an environment consists of in terms of different execution architecture components and this is the required logical infrastructure scale”
    • Abstracts you from: working out what you need to create fully separate environments
    • Meaning you can: create environments
    • Presents to the next layer: logical environments (aka Spaces) where code can be deployed
  • Deployment architecture
    • Consists of: the orchestration and automation tools required to release new versions of a Business Application to the Platform Application
    • The owner of this layer can make the call: “These are the tools I will use to deploy the application and configure it to work in the target logical environment”
    • Abstracts you from: the details about how to promote new versions of your application, static content, database and data
    • Meaning you can: release code to environments
    • Presents to the next layer: a user interface and API for releasing code
  • Security model
    • Consists of: a user directory, an authentication mechanism, an authorisation mechanism
    • The owner of this layer can make the call: “These authorised people can make the following changes to all layers down to Platform Infrastructure Automation”
    • Abstracts you from: having to implement controls over platform use.
    • Meaning you can: empower the right people and be protected from the wrong people
    • Makes the call: “I want only authenticated and authorised users to be able to use my platform application”
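The layered anatomy above can be sketched in code to make the “divide and conquer” idea concrete: each layer is owned separately, and each consumer only ever sees what the layer directly below presents. This is a rough illustrative model, not an implementation; the class and field names are my own invention.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    makes_the_call: str
    presents: str

# Bottom-up, mirroring the list above (descriptions abbreviated)
ANATOMY = [
    Layer("Hardware management",
          "I'll use this hardware",
          "the ability to work with logical infrastructure"),
    Layer("Basic infrastructure orchestration",
          "I will use these APIs",
          "programmatic bulk updates to logical infrastructure"),
    Layer("Platform infrastructure orchestration",
          "I need this many servers, this storage, this network",
          "enough logical infrastructure to meet demand"),
    Layer("Execution architecture",
          "this is how I provide runtime dependencies",
          "the software resources required to run Business Applications"),
    Layer("Logical environment separation",
          "this is what an environment consists of",
          "logical environments (aka Spaces) for deployment"),
    Layer("Deployment architecture",
          "these are the tools I will use to deploy",
          "a user interface and API for releasing code"),
    Layer("Security model",
          "only authenticated and authorised users may use the platform",
          "controlled access to all the layers below"),
]

def visible_to(consumer_index):
    """Each layer abstracts its consumer from everything beneath it,
    so a consumer only sees what the layer directly below presents."""
    return ANATOMY[consumer_index - 1].presents
```

The point of the model is that an owner at any level reasons only about their own layer’s call and the interface presented from below, which is exactly the abstraction each bullet above describes.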

I’d love to hear some feedback on this.  In the meantime, I’m planning to map some of the recent projects I’ve been involved with into this architecture to see how well they fit and what the challenges are.


DevOps – have fun and don’t miss the point!

DevOps is a term coined in 2009 which has since evolved into a movement of people passionate about a set of practices that enable companies to successfully manage rapidly changing IT services by bridging the gap between Development and Operations teams.

In my experience DevOps is very rewarding, so in this blog I’m going to try to bring its practical side to life.   I hope my post may help a few people connect DevOps to things they are already doing and even better, form some ideas about how to go further.

Brief Definition

The basic premise of DevOps is that most organisations with IT functions have separate teams responsible for software Development and for IT Operations (aka Service Management). The observation is that this separation of duties can have the negative side effects of inefficiency, unresponsiveness and unhappiness.

If your company is meeting its business objectives and you don’t have any sense for this pain, you’ve probably read enough!  Put the blog down, get yourself an ‘I’m rocking the DevOps’ t-shirt and leave me some comments containing your secrets so we can all learn from you!

A lot has been written that analyses the misery of Development versus Operations, so I’ll keep it brief.

The problems stem from the natural tension between Developers (who deliver changes) and Operators (who are incentivised to maximise service availability and hence prevent changes, along with other such obvious threats).  Add to the mix some wholesome but local optimisation (performed independently by each team) and you may start having ‘annual budget’ loads of completed yet unreleased features and stable but stagnated live service.  Periodically the Business teams mandate a ‘go live’, thus blowing up the dam and drowning live service in changes, some of which have actually gone quite toxic…

In the ideal world developers should be assured of a smooth, almost instantaneous transition of their code into production.  Operations staff should be assured of receiving stable and operable applications, and of having sufficient involvement in all changes that will impact service (in particular altered hosting requirements.)

I think of DevOps as a campaign which we can all join to maximise the throughput of successful IT changes where we work.  I’m personally ready to trust business teams to ensure we are choosing to deliver the right changes!

Where to Find the Fun

People and processes…, blah-dee-blah…  Ok, fair enough, DevOps does revolve around people, but we all like technology, so let’s start with the fun parts and come back to people and processes later!

Typical DevOps Technical Concerns include:

  • Configuration Management
  • Application Lifecycle Management
  • Environment Management
  • Software and Infrastructure Automation.

Where I work we have a well-defined technology architecture methodology which neatly classifies these concerns as Development Architecture (aka DevArch).

Q. Does taking the above concerns very seriously mean that you are doing DevOps?
It helps; it’s a good start.  But it’s also key that the above concerns are consistently understood and are important priorities to both Development and Operations.

Caring about these concerns alone isn’t enough, time to tool them up!

The great news is that the popularity of DevOps has left the tooling space awash with fantastic innovation and excellent tooling. There are too many tools to mention, but current favourites of mine include the orchestration tool Jenkins, the version control tool Git, the code quality framework and dashboard Sonar, the automated configuration management tool Chef, and the VM manager Vagrant.

We are also now at the point where freely available open source tools (including all the above) exceed the capability of many commercial alternatives (hasta la vista ClearCase!).

Q. Does using some or all of the above types of tool mean you are doing DevOps? 
Not necessarily, but they help a lot.  Especially if they are equally prominent to and shared by Development and Operations.

Perhaps we need to add automation – lovely stuff like automated builds, automated deployments, automated testing, automated infrastructure.

In more good news, the understanding and appreciation of software delivery automation has gone up massively in the last few years.  Gradually this has reduced fear (and denial!) and increased management support and budgets, and also raised expectations!  I must thank misters Humble and Farley in particular for cementing a usable meaning of continuous delivery and creating the excellent delivery/deployment pipeline pattern for orchestrating automation.

Q. Does doing continuous integration and/or having a delivery pipeline mean you are doing DevOps?
It helps a lot.  But assuming (as every pipeline I’ve worked with so far) your pipeline doesn’t implement zero touch changes from check-in to production, Operations will still be prominent in software releases.  And of course their involvement continues long after deployments and throughout live service.  One development team’s automated deployment orchestrator might be one Operation team’s idea of malware!  Automation alone will certainly not eliminate the tension caused by the opposing relationships to change.
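The “not zero-touch” point above can be sketched as a pipeline that automates every stage up to production, then gates the final deployment on an explicit Operations approval. The stage names and function are hypothetical, just to illustrate the shape of the pattern.

```python
def run_pipeline(commit, approved_by_operations=False):
    """Run automated stages, then gate production on an explicit
    Operations approval rather than going zero-touch."""
    stages = ["build", "unit-test", "deploy-to-staging", "acceptance-test"]
    completed = [f"{stage}:{commit}" for stage in stages]
    if approved_by_operations:
        # Operations stays in the loop: production is a deliberate decision
        completed.append(f"deploy-to-production:{commit}")
    return completed
```

The design choice worth noticing is that the approval flag is an input to the pipeline, not something the pipeline grants itself: automation speeds up everything before the gate, while the gate itself remains a human, cross-team agreement.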

The Point (Don’t Miss It)

Ok, technology over; let’s talk directly about people and processes.  The key point is that to globally optimise your organisation’s ability to deliver successful software changes, you have to think across both Development and Operations teams.  It’s even nearly in the name: DevOps.

Development teams optimising software delivery processes in a silo (for example with Agile) will not help Operations teams accommodate shorter release cycles.  Operations teams optimising risk assessment processes (for example with ITIL) around “black box” software releases with which they have had little involvement will draw the only logical conclusion: that changes should be further inhibited.

Operations teams optimising systems administration with server virtualisation and automated configuration management (e.g. Puppet) will not help development and testing productivity if they are not understood and adopted in earlier lifecycle stages by Development teams.

Development teams optimising code deployment processes with shiny automated deployment tools achieve diminished returns and in fact increased risk if the processes are not approved for use in Production.

There is no substitute for uniting over the common goal of improving release processes. Tools and automation are a lot of fun, but pursuing them in silos will not achieve what DevOps really promises. Collaboration is paramount; don’t miss the point!

Book Review: The Phoenix Project

Earlier this year the “DevOps” movement hit a new milestone with the publication of the first novel on the subject (yes, as in an entertaining work of fiction).

The Phoenix Project: A Novel About IT, DevOps and Helping Your Business Win…

If you can’t be bothered to read this whole review, then my advice is to buy it. Just don’t blame me if you don’t like it… you should have read the whole review.

To anyone familiar with Eliyahu M. Goldratt’s “The Goal”, The Phoenix Project will feel pleasantly familiar.

To anyone unfamiliar with The Goal, it is basically the crusade of a middle manager faced with the challenge of turning around a failing manufacturing plant to save it from closure. This challenge is supported by a quirky physicist advisor who uses the Socratic method to reveal how to apply scientific reasoning to challenge conventional manufacturing processes and economics. Throughout The Goal there are lots of simple models designed to explain the principles and teach you something. It makes you feel good whilst you are reading it, but leaves you at the end a little uncertain whether you’ve actually learnt anything you can apply in the real world.

Modernise the hero and swap their dysfunctional manufacturing plant for a dysfunctional IT Operations team, and you aren’t far off The Phoenix Project. In fact it is almost a sequel in The Goal series. A manufacturing plant which could easily have been from The Goal is used heavily in The Phoenix Project to highlight what manufacturing can teach IT. This is a great metaphor that I definitely subscribe to.

So is The Phoenix Project entertaining and do you actually learn anything?

I certainly found it highly entertaining; the observations were very sharp and definitely reminiscent of things I’ve seen. There are plenty of familiar examples of poor decisions: trying to go too fast at the expense of quality and stability, resulting in unpredictability and mayhem. All exciting stuff to a DevOps freak.

Do you learn anything from The Phoenix Project? Perhaps, mostly through re-evaluating your own experiences. There isn’t a huge amount of detailed substance on DevOps implementation in the book; in fact, it appears to be a good plug for the author’s next book, the DevOps Cookbook.
Really looking forward to that!

In summary, I personally recommend reading either The Phoenix Project or The Goal, and I eagerly await the Cookbook.