Using ADOP and Docker to Learn Ansible

As I have written here, the DevOps Platform (aka ADOP) is an integration of open source tools that is designed to provide the tooling capability required for Continuous Delivery.  Through the concept of cartridges (plugins) ADOP also makes it very easy to re-use automation.

In this blog I will describe an ADOP Cartridge that I created as an easy way to experiment with Ansible.  Of course there are many other ways of experimenting with Ansible such as using Vagrant.  I chose to create an ADOP cartridge because ADOP is so easy to provision and predictable.  If you have an ADOP instance running you will be able to experience Ansible doing various interesting things in under 15 minutes.

To try this for yourself:

  1. Spin up an ADOP instance
  2. Load the Ansible 101 Cartridge (instructions)
  3. Run the jobs one-by-one and in each case read the console output.
  4. Re-run the jobs with different input parameters.

For anyone only loosely familiar with ADOP, Docker and Ansible, I recognise that this blog could be hard to follow, so here is a quick diagram of what is going on.

[Diagram: docker-ansible]

The Jenkins Jobs in the Cartridge

The jobs do the following things:

The first job, as its name suggests, simply demonstrates how to install Ansible on CentOS.  It installs Ansible in a Docker container in order to keep things simple and easy to clean up.  Having built a Docker image with Ansible installed, it tests the image by running the following command inside the container:

$ ansible --version
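If you want to recreate the essence of that first job outside ADOP, something along these lines should work (a minimal sketch only; the image name and the exact install steps are my assumptions rather than the cartridge's actual code):

$ cat > Dockerfile.ansible <<'EOF'
# CentOS base with Ansible installed from the EPEL repository
FROM centos:7
RUN yum install -y epel-release && \
    yum install -y ansible && \
    yum clean all
EOF
$ docker build -f Dockerfile.ansible -t ansible-101 .
$ docker run --rm ansible-101 ansible --version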

2_Run_Example_Adhoc_Commands

This job is a lot more interesting than the previous one.  As the name suggests, the job is designed to run some ad hoc Ansible commands (which is one of the first things you’ll do when learning Ansible).

Since the purpose of Ansible is infrastructure automation, we first need to set up an environment to run commands against.  My idea was to set up an environment of Docker containers pretending to be servers.  In real life I don’t think we would ever want Ansible configuring running Docker containers (we normally want Docker containers to be immutable and certainly don’t want them to have ssh access enabled).  However, I felt it was a quick way to get started and to create something repeatable and disposable.

The environment created resembles the diagram above.  As you can see, we create two Docker containers (acting as servers) calling themselves web-node-1 and web-node-2, and one calling itself db-node.  The images already contain a public key (the same one Vagrant uses, actually) so that they can be ssh’d to (once again not good practice with Docker containers, but needed so that we can treat them like servers and use Ansible).  We then use an image which we refer to as the Ansible Control Container.  We create this image by installing Ansible and adding an Ansible hosts (inventory) file that tells Ansible how to connect to the db and web “nodes” using the same key mentioned above.
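For reference, the kind of inventory (hosts) file the Ansible Control Container needs looks roughly like this (the group names, connection user and key path are illustrative assumptions; the cartridge generates its own version):

$ cat > hosts <<'EOF'
[web]
web-node-1
web-node-2

[db]
db-node

[all:vars]
ansible_user=vagrant
ansible_ssh_private_key_file=/root/.ssh/id_rsa
EOF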

With the environment in place the job runs the following ad hoc Ansible commands:

  1. ping all web nodes using the Ansible ping module: ansible web -m ping
  2. gather facts about the db node using the Ansible setup module: ansible db -m setup
  3. add a user to all web servers using the Ansible user module: ansible web -b -m user -a 'name=johnd comment="John Doe" uid=1040'

By running the job and reading the console output you can see Ansible in action and then update the job to learn more.

3_Run_Your_Adhoc_Command

This job is identical to the job above in terms of setting up an environment to run Ansible.  However instead of having the hard-coded ad hoc Ansible commands listed above, it allows you to enter your own commands when running the job.  By default it pings all nodes:

ansible all -m ping

4_Run_A_Playbook

This job is identical to the job above in terms of setting up an environment to run Ansible.  However, instead of passing in an ad hoc Ansible command, it lets you pass in an Ansible playbook to run against the nodes.  By default the playbook that gets run installs Apache on the web nodes and PostgreSQL on the db node.  Of course you can change this to run any playbook you like, so long as it is set to run on a host expression that matches web-node-1, web-node-2, and/or db-node (or "all").
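To give a flavour of the default behaviour, an equivalent playbook might look something like this (the package names and module arguments are illustrative; the playbook shipped with the cartridge is the source of truth):

$ cat > example-playbook.yml <<'EOF'
---
- hosts: web
  become: true
  tasks:
    - name: Install Apache
      yum: name=httpd state=present

- hosts: db
  become: true
  tasks:
    - name: Install PostgreSQL server
      yum: name=postgresql-server state=present
EOF
$ ansible-playbook -i hosts example-playbook.yml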

How jobs 2-4 work

To understand exactly how jobs 2-4 work, it is best to read the code, which is reasonably well commented and should be fairly readable.  At a high level, however, the following steps are run (see the condensed sketch after this list):

  1. Create the Ansible inventory (hosts) file that our Ansible Control Container will need so that it can connect (ssh) to our db and web “nodes” to control them.
  2. Build the Docker image for our Ansible Control Container (install Ansible like the first Jenkins job, and then add the inventory file)
  3. Create a Docker network for our pretend server containers and our Ansible Control container to all run on.
  4. Create a docker-compose file for our pretend servers environment
  5. Use docker-compose to create our pretend servers environment
  6. Run the Ansible Control Container, mounting in the Jenkins workspace if we want to run a local playbook file, or otherwise just running the ad hoc Ansible command.
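Condensed into plain shell, the flow looks roughly like this (the image, network and file names are mine, purely to illustrate the sequence rather than mirror the cartridge's exact code):

# 1. Write the inventory (hosts) file, as sketched earlier
# 2. Build the Ansible Control Container image (Ansible install plus the inventory file)
$ docker build -t ansible-control .
# 3. Create a network shared by the control container and the pretend servers
$ docker network create ansible101
# 4 and 5. Generate a docker-compose file for the pretend servers and bring them up
$ docker-compose up -d
# 6. Run the control container on the same network, mounting in the workspace,
#    and execute either an ad hoc command or a playbook
$ docker run --rm --net ansible101 -v "$PWD":/workspace ansible-control \
      ansible all -m ping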

Conclusion

I hope this has been a useful read and has clarified a few things about Ansible, ADOP and Docker.  If you find this useful, please star the GitHub repo and/or share a pull request!

Bonus: here is an ADOP Platform Extension for Ansible Tower.


Running the DevOps Platform on Microsoft Azure

As per my last post about GCE, sometimes knowing something is possible just isn’t good enough.  So here is how I spun up the DevOps Platform on the Microsoft Azure cloud.  Warning: thanks to Docker Machine, this post is very similar to that earlier one.

1. I needed an Azure account.

2. I logged into my Azure account and didn’t click “view the new Portal”.

3. On the left hand menu, I scrolled down to the bottom (it didn’t immediately look to me like it would scroll, so hover over it) and clicked Settings.  Here I was able to see my subscription ID and copy it.

4. (Having previously installed Docker Toolbox, see here) I opened Git Bash (as an Administrator) and ran this command:

$ docker-machine create --driver azure --azure-size Standard_A3 --azure-subscription-id <the ID I just copied> markos01

I was prompted to open a URL in my browser, enter a confirmation code, and then log in with my Azure credentials.  Credit to Microsoft, this was easier than GCE, for which I needed to install the gcloud command-line utility!

You will notice that this is fairly standard.  I picked a Standard_A3 machine type, which is roughly equivalent to what we use for AWS and GCP.

5. I waited while a machine containing Docker was created in Azure

6. I cloned the ADOP Docker Compose repository from GitHub:

$ git clone https://github.com/Accenture/adop-docker-compose
$ cd adop-docker-compose

7. I ran the normal startup.sh command as follows:

$ ./startup.sh -m markos01 -c NA

And entered a username (thanks to this recent enhancement), and hey presto:

...
SUCCESS, your new ADOP instance is ready!
Run these commands in your shell:
eval "$(docker-machine env $MACHINE_NAME)"
source env.config.sh
Navigate to http://52.160.97.159 in your browser to use your new DevOps Platform!

And just to prove it:

$ whois 52.160.97.159 | grep Org
Organization: Microsoft Corporation (MSFT)
OrgName: Microsoft Corporation
OrgId: MSFT

8. I had to go to All resources > markos01-firewall > Inbound security rules and add a rule to allow HTTP to my server on port 80.
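If you prefer the command line to the portal, the newer Azure CLI can do something similar.  A hedged sketch (the resource group name assumes the docker-machine default, and your names may differ):

$ az vm open-port --resource-group docker-machine --name markos01 --port 80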

9. I viewed my new Azure-hosted ADOP instance in (of course…) Chrome! 😉

More lovely stuff!

 

Running the DevOps Platform on Google Compute Engine

Sometimes knowing something is possible just isn’t good enough.  So here is how I spun up the DevOps Platform on Google Compute Engine (GCE).

1. I needed a Google Compute Engine account.

2. I enabled the Google Compute APIs for my GCE account

3. I installed the Google Cloud command-line SDK (gcloud)

4. I opened the Google Cloud SDK Shell link that had appeared in my Windows Start menu and ran:

C:\> gcloud auth login

This popped open a Chrome window and asked me to authenticate against my GCE account.

5. (Having previously installed Docker Toolbox, see here) I opened Git Bash (as an Administrator) and ran this command:

$ docker-machine create --driver google \
                 --google-project <a project in my GCE account> \
                 --google-machine-type n1-standard-2 \
                 markosadop01

You will notice that this is fairly standard.  I picked an n1-standard-2 machine type which is roughly equivalent to what we use for AWS.

6. I waited while a machine containing Docker was created in Google

7. I cloned the ADOP Docker Compose repository from GitHub:

$ git clone https://github.com/Accenture/adop-docker-compose
$ cd adop-docker-compose

8. I ran the normal startup.sh command as follows:

$ ./startup.sh -m markosadop01 -c NA

And hey presto:

...
SUCCESS, your new ADOP instance is ready!
Run these commands in your shell:
eval "$(docker-machine env $MACHINE_NAME)"
source env.config.sh
Navigate to http://104.197.235.64 in your browser to use your new DevOps Platform!

And just to prove it:

$ whois 104.197.235.64 | grep Org
Registrant Organization: Google Inc.
Admin Organization: Google Inc.
Tech Organization: Google Inc.

9. I had to go to Networks > Firewall rules and add a rule to allow HTTP to my server.
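The same thing can likely be done from the command line with the Cloud SDK, along these lines (the rule name is my own choice and the default network is assumed):

$ gcloud compute firewall-rules create allow-http --allow tcp:80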

10. I viewed my new ADOP on Google instance in (of course…) Chrome!

Lovely stuff!

Start Infrastructure Coding Today!

* Warning: this post contains mildly anti-Windows sentiments *

It has never been easier to get ‘hands-on’ with Infrastructure Coding and Containers (yes including Docker), even if your daily life is spent using a Windows work laptop.  My friend Kumar and I proved this the other Saturday night in just one hour in a bar in Chennai.  Here are the steps we performed on his laptop.  I encourage you to do the same (with an optional side order of Kingfisher Ultra).

 

  1. We installed Docker Toolbox.
    It turns out this is an extremely fruitful first step as it gives you:

    1. Git (and in particular GitBash). This allows you to use the world’s best Software Configuration Management tool, Git, and welcomes you into the world of being able to use and contribute to Open Source software on GitHub.  Plus it has the added bonus of turning your laptop into something which understands good wholesome Linux commands.
    2. Virtual Box. This is a hypervisor that turns your laptop from being one machine running one Operating System (Windoze) into something capable of running multiple virtual machines with almost any Operating System you want (even UniKernels!).  Suddenly you can run (and develop) local copies of servers that from a software perspective match Production.
    3. Docker Machine. This is a command line utility that will create virtual machines for running Docker on.  It can do this either locally on your shiny new Virtual Box instance or remotely in the cloud (even the Azure cloud – Linux machines of course)
    4. Docker command line. This is the main command line utility of Docker.  This will enable you to download and build Docker images, and turn them into running Docker containers.  The beauty of the Docker command line is that you can run it locally (ideally in GitBash) on your local machine and have it control Docker running on a Linux machine.  See diagram below.
    5. Docker Compose. This is a utility that gives you the ability to run and associate multiple Docker containers by reading what is required from a text file. [Diagram: DockerVB]
  2. Having completed step 1, we opened up the Docker Quickstart Terminal by clicking the entry that had appeared in the Windows start menu. This runs a shell script via GitBash that performs the following:
    1. Creates a virtual box machine (called ‘default’) and starts it
    2. Installs Docker on the new virtual machine
    3. Leaves you with a GitBash window open that has the necessary environment variables set to point the Docker command line utility at your new virtual machine (roughly the commands sketched below).
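    If you ever need to do the same by hand, the Quickstart Terminal is roughly equivalent to the following (a sketch; 'default' is simply the machine name the Toolbox uses):
    $ docker-machine create --driver virtualbox default
    $ eval "$(docker-machine env default)"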
  3. We wanted to test things out, so we ran:
    $ docker ps -a
    CONTAINER ID  IMAGE   COMMAND   CREATED   STATUS   PORTS  NAMES

     

    This showed us that our Docker command line tool was successfully talking to the Docker daemon (process) running on the ‘default’ virtual machine. And it showed us that no containers were either running or stopped on there.

  4. We wanted to test things a little further, so we ran:
    $ docker run hello-world

    Hello from Docker.
    This message shows that your installation appears to be working correctly.

    To generate this message, Docker took the following steps:
    The Docker client contacted the Docker daemon.
    The Docker daemon pulled the "hello-world" image from the Docker Hub.
    The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
    The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

    To try something more ambitious, you can run an Ubuntu container with:
    $ docker run -it ubuntu bash

    Share images, automate workflows, and more with a free Docker Hub account:
    https://hub.docker.com

    For more examples and ideas, visit:
    https://docs.docker.com/userguide

     

    The output is very self-explanatory.  So I recommend reading it now.

  5. We followed the instructions above to run a container from the Ubuntu image.  This started a container running Ubuntu for us, and we ran a command to satisfy ourselves that we were running Ubuntu.  Note one slight modification: we had to prefix the command with ‘winpty’ to work around a tty-related issue in GitBash:
    $ winpty docker run -it ubuntu bash
    root@2af72758e8a9:/# apt-get -v | head -1
    apt 1.0.1ubuntu2 for amd64 compiled on Aug  1 2015 19:20:48
    root@2af72758e8a9:/# exit
    $ exit

     

  6. We wanted to run something else, so we ran:
    $ docker run -d -P nginx:latest

     

  7. This caused the Docker command line to do more or less what is stated in the previous step with a few exceptions.
    • The -d flag caused the container to run in the background (we didn’t need -it).
    • The -P flag caused Docker to publish the ports exposed by Nginx onto high-numbered ports of the virtual machine.
    • The Image was Nginx rather than Ubuntu.  We didn’t need to specify a command for the container to run after starting (leaving it to run its default command).
  8. We then ran the following to establish how to connect to our Nginx:
    $ docker-machine ip default
    192.168.99.100
    
     $ docker ps
    
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                           NAMES
    
    826827727fbf        nginx:latest        "nginx -g 'daemon off"   14 minutes ago      Up 14 minutes       0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp   ecstatic_einstein
    
    

     

  9. We opened a proper web browser (Chrome) and navigated to http://192.168.99.100:32769/ using the information above (your IP address and mapped port may differ). Pleasingly we were presented with the ‘Welcome to nginx!’ default page.
  10. We decided to clean up some of what we’d created locally on the virtual machine, so we ran the following to:
    1. Stop the Nginx container
    2. Delete the stopped containers
    3. Demonstrate that we still had the Docker ‘images’ downloaded

 

$ docker kill `docker ps -q`
8d003ca14410
$ docker rm `docker ps -aq`
8d003ca14410
2af72758e8a9
…
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
$ docker images
REPOSITORY                     TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
nginx                          latest              sha256:99e9a        4 weeks ago         134.5 MB
ubuntu                         latest              sha256:3876b        5 weeks ago         187.9 MB
hello-world                    latest              sha256:690ed        4 months ago        960 B

 

 

  11. We went back to Chrome and hit refresh. As expected, Nginx was gone.
  12. We opened Oracle VM VirtualBox from the Windows Start menu so that we could observe our ‘default’ machine listed as running.
  13. We ran the following to stop our ‘default’ machine, and observed it then stop in VirtualBox:
    $ docker-machine stop default

     

  14. Finally we installed Vagrant. This is essentially a much more generic version of Docker Machine, capable of creating virtual machines in VirtualBox not just for Docker but for many other purposes.  For example, from an Infrastructure Coding perspective, you might run a virtual machine for developing Chef code.

 

Not bad for one hour on hotel wifi!

Kumar keenly agreed he would complete the following next steps.  I hope you’ll join him on the journey and Start Infrastructure Coding Today!

  1. Learn Git. It really only takes 10 minutes with this tutorial LINK to learn the basics.
  2. Docker – continue the journey here
  3. Vagrant
  4. Chef
  5. Ansible

 

Please share any issues you hit following this and I’ll improve the instructions.  Please share any other useful tutorials and I will add those too.

Reusable Docker Testing Approach

In this blog I will describe a reusable approach to testing Docker that I have been working on.

By ‘testing Docker’ I mean performing the following actions:

  • Static code analysis of the Dockerfile i.e. is the file syntactically valid and written to our expected standards?
  • Unit testing the Docker Image created by performing a build with our Dockerfile i.e. does our Dockerfile look like it created the Image we were expecting?
  • Functional testing the Container created by running an instance of our container i.e. when running does it look and do as we expected?

I wanted a solution that was very easy to adopt and extend so I chose to:

  • implement it in Docker so that it will work for anyone using Docker (see this diagram)
  • use Docker Compose to make it as easy as possible to trigger
  • reuse Dockerlint
  • use Ruby because it is a fairly widespread skill amongst infrastructure-as-code people (for now, until Go takes over…), and because the docker-api gem is very powerful, albeit it expects you to learn a bit more about the Docker API in order to use it.
  • use RSpec and Serverspec as the testing frameworks because they have good documentation and they support BDD

So what is the solution?

Essentially it is a Docker image called test-docker.  To use it, you mount in your ‘Dockerfile’ and your ‘tests’ directory; it then:

  1. Runs Dockerlint to perform the static code analysis on the Dockerfile
  2. Runs your tests, which I encourage you to write both for inspecting the image and for testing a running container.

How to see it in action?

To run this you need Docker installed and functioning happily.  Personally I’m using:

  • a Windows laptop
  • Docker Toolbox, which gave me docker-machine, which in turn manages a Linux virtual machine for me running on a local installation of VirtualBox
  • docker-compose installed (I did it manually)
  • git bash aka Git For Windows as my terminal

With the above or equivalent, you simply need to do:

$ git clone https://github.com/kramos/test-docker.git
$ cd test-docker
$ docker-compose -f docker-compose-test-docker.yml up

You should see an output like this:

Creating testdocker
Creating testdocker_lintdocker_1
Attaching to testdocker, testdocker_lintdocker_1
testdocker   | /usr/local/bin/ruby -I/usr/local/bundle/gems/rspec-support-3.4.0/lib:/usr/local/bundle/gems/rspec-core-3.4.0/lib /usr/local/bundle/gems/rspec-core-3.4.0/exe/rspec --pattern spec/\*_spec.rb
lintdocker_1 | Check passed!
testdocker_lintdocker_1 exited with code 0
testdocker   |
testdocker   | Container
testdocker   |   get running
testdocker   |     check ruby
testdocker   |       Command "ruby --version"
testdocker   |         stdout
testdocker   |           should match /ruby/
testdocker   |         stderr
testdocker   |           should be empty
testdocker   |
testdocker   | Image
testdocker   |   inpsect metadata
testdocker   |     should not expose any ports
testdocker   |
testdocker   | Finished in 1.48 seconds (files took 1.45 seconds to load)
testdocker   | 3 examples, 0 failures
testdocker   |
testdocker exited with code 0
Gracefully stopping... (press Ctrl+C again to force)


 

All good.  But what happened?  Well, everything I said we wanted to happen, run against the test-docker tool itself.  #Dogfood and all that.

You can also try out another example e.g.:

$ docker-compose -f examples/redis/docker-compose.yml up

So how to use this for your own work?

Hopefully you’ll agree this is very easy (at least to get started):

  1. Replace the Dockerfile in the root of the test-docker folder with your own Dockerfile (plus any other local resources your Dockerfile needs)
  2. Run the following (this time we allow docker-compose to use the default configuration file, which you also pulled from Git):
$ docker-compose up
  3. You will find out what Dockerlint thinks of your code, followed by finding out whether, by extreme luck, any of the tests that were written for the test-docker image (as opposed to your image) pass.
  4. Open the .rb file (in tests/spec) and update it to test your image using anything that you can do to a stopped container via the docker-api.
  5. Open the .rb file (in tests/spec) and update it to test your application using anything that you can do to a running container via the docker-api and Serverspec.
  6. I suggest removing the .git folder and initialising your own git repository to manage your Dockerfile and your tests.

 

Functional tests that run the container require two subtly different approaches, according to whether your Docker image is expected to run as a daemon or to just run, do something and stop.  In the former case, you can use a lot of Serverspec functionality.  In the latter, your choices are more limited: essentially running the container multiple times and in each case grabbing the output and parsing it.
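For the run-once case, the pattern can be as simple as the following shell sketch (the image name and expected string are placeholders of mine):

$ output=$(docker run --rm my-run-once-image)
$ echo "$output" | grep -q "some expected text" && echo "PASS" || echo "FAIL"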

 

Conclusion

 

There were a surprising number of things I had to learn on the fly here to get this working, but I don’t want this blog to drag on.  Let me know how you get on and I will happily share more, especially if any of the magic doesn’t work as expected, for example when writing tests.

 

I’ll leave you with my current list of things I want to improve:

  • Make it work with the Ruby slim base image (the current image is huge)
  • Get it working with InSpec instead of Serverspec
  • Provide better examples of tests
  • I should really draw a diagram to help anyone new to this understand all this inception computing…

 

Credits:

I took a huge amount of help from:
http://www.unixdaemon.net/tools/testing-dockerfiles-with-serverspec/
https://github.com/sherzberg/docker-hhvm/
https://github.com/18F/docker-elasticsearch

 

Docker Inception


Sometimes when you are working with Docker it feels a bit like the movie Inception.  It doesn’t help when you are doing things like this.  So here is a diagram that might make things clearer.

[Diagram: docker-inception]

New Directions in Operating Systems: Designer Cows and Intensive Farming

On Tuesday I attended the inaugural New Directions in Operating Systems conference in London, which was excellently organised by Justin Cormack and sponsored by Bytemark (no relation to me!) and Red Hat.

I attended with the attitude that since things seem to be moving so fast in this space (e.g. Docker), there would be a high chance that I would get glimpses of the not-too-distant future.  I was not disappointed.

I’m not going to cover every talk in detail. For that, I recommend reading this which someone from Cambridge somehow managed to live blog.  Also most of the presentation links are now up here and I expect videos will follow.

Instead, here are the two main highlights that I took away.

Designer Cows

If we are aspiring to treat our servers as cattle (as per the popular metaphor from CERN), a number of talks were (to me) about how to make better cows.

The foundation of all solutions was unanimously either bare metal or a Xen hypervisor.  As per the name, this wasn’t the conference for talking about the Open Compute Project or advances like those AWS has made recently with C4 instances.  We can think of the hypervisor as our “field” in the cow metaphor.

For the sake of my vegetarian and Indian friends, let’s say the reason for owning cows is to get milk.  Cows (like the servers we use today) have evolved significantly (under our influence) and cows are now very good at producing milk.  But they still have a lot of original “features” not all of which may directly serve our requirements.  For example they can run, they can moo, they can hear etc.

A parallel can be drawn to a web server which similarly may possess its own “redundant features”.  Most web servers can talk the language required by printers, support multiple locales and languages, and can be connected to using a number of different protocols.  There could even be redundancy in traditional “core features” for example supporting multiple users, multiple threads, or even virtual memory.

The downside of all of this redundancy is not just the efficiency of storage, processing and maintenance; it is also the impact on security.  All good sysadmins will understand that unnecessary daemons running on a box (e.g. hopefully even sshd) expand your attack surface by exposing additional attack vectors.  This principle can be taken further.  As Antti Kantee said, drivers can add millions of lines of code to your server.  Every one of those lines presents the potential for a security defect.

Robert Watson was among those who quoted Bruce Schneier:

Defenders have to protect against every possible vulnerability, but an attacker only has to find one security flaw to compromise the whole system.

With Heartbleed, Shellshock and POODLE all in the last few months, this clearly needs some serious attention (for the good of all of us!).

To address this we saw demonstrations of:

  • Rump Kernels which are stripped down to include only filesystems, TCP/IP, system calls and device drivers, but no threads, locking, scheduling etc.  So this is more of a basis from which to build a cow rather than a working one.
  • Unikernels where all software layers are from the same language framework.  So this is building a whole working cow from bespoke parts with no redundant parts (ears etc!).
  • RumpRun for building Unikernels that run a POSIX application (mathopd, an HTTP server, in the example), i.e. taking a Rump Kernel and building it into a single-application kernel for one single job.  So another way to build a bespoke cow.
  • MirageOS, an OCaml-based library operating system for building type-safe Unikernels.  So another way to build a bespoke, very safe cow.
  • GenodeOS a completely new operating system taking a new approach to delegating trust through the application stack.  So to some extent a new animal that produces milk with a completely re-conceived anatomy.

Use cases range from the “traditional”, like building normal (but much more secure) servers, to the completely novel, such as very lightweight and short-lived servers that start up almost instantaneously, do something quickly and disappear.

Docker and CoreOS, with its multi-virtual-machine-aware and now stripped-down, container-ready functionality, were also mentioned.  However, whilst CoreOS has a smaller attack surface, if you are running a potentially quite big Docker container on it, you may be adding back lots of attack vectors.  Possibly, as dependency resolution for Docker images improves, this will progressively reduce the size of Docker containers and hence the number of lines of code and potential vulnerabilities included.

Intensive Farming

Two presentations of the day stood out for focussing on a different (and to me more recognisable) level of problem.  First was Gareth Rushgrove’s, about Configuration Management.  He covered a very wide range of concepts and tools focused on managing the configuration of single servers and fleets of servers over time, rather than novel ways to construct operating systems.  He made the statement:

If servers are cattle not pets, we need to talk about fields and farms

Which inspired the title of this blog and led to some discussion (during the presentation and on Twitter) about using active runtime Configuration Management tools like Puppet to manage the adding and removing of infrastructure resources over time.  Even if most of your servers are immutable, it’s quite appealing to think Puppet or Chef could manage both what servers exist and the state of those more pet-like creatures that do change over time (in the applications that most good “horse” organisations run).

Whilst AWS CloudFormation can provision and update your environment (a so-called Stack), the resultant changes may be heavy-handed and it is clearly a single-cloud-provider solution.  Terraform is a multi-cloud-provider alternative to consider and supports a good preview mode, but doesn’t evolve the configuration on your servers over time.

Gareth also mentioned:

  • OSv, which at first I thought was just an operating system query engine like Facebook’s osquery, but which appears to be a fully API-driven operating system.
  • Atomic and OSTree, which Michael Scherer covered in the next talk. These look like very interesting solutions for providing confidence and integrity in those bits of the application and operating system that aren’t controlled by Chef, Puppet or a Dockerfile.

I really feel like I’ve barely done justice to describing even 20% of this excellent conference.  Look out for the videos and look out for the next event.

No animals were harmed during the making of the conference or this blog.