1-Minute Multi-Node VM Setup


Quickly spin up multiple VMs with useful DNSs on your local machine and automate complex environments easily.

Here’s a video:


Maintaining Docker at scale, I’m more frequently concerned with clusters of VMs than the containers themselves.

The irony of this is not lost on me.

Frequently I need to spin up clusters of machines. Doing this is either very slow and unreliable (an enterprise OpenStack implementation) or expensive (Amazon).

The obvious answer to this is to use Vagrant, but managing this can be challenging.

So I present here a very easy way to set up a useful Vagrant cluster. With this framework, you can then automate your ‘real’ environment and play to your heart’s content.

$ pip install shutit
$ shutit skeleton
# Input a name for this module.
# Default: /Users/imiell/shutit_resins
[hit return to take default]
# Input a ShutIt pattern.
Default: bash
bash: a shell script
docker: a docker image build
vagrant: a vagrant setup
docker_tutorial: a docker-based tutorial
shutitfile: a shutitfile-based project
[type in vagrant]
How many machines do you want (default: 3)? 3
[hit return to take default]
What do you want to call the machines (eg superserver) (default: machine)?
[hit return to take default]
Do you want to have open ssh access between machines? (default: yes) yes
Initialized empty Git repository in /Users/imiell/shutit_resins/.git/
Cloning into 'shutit-library'...
remote: Counting objects: 1322, done.
remote: Compressing objects: 100% (33/33), done.
remote: Total 1322 (delta 20), reused 0 (delta 0), pack-reused 1289
Receiving objects: 100% (1322/1322), 1.12 MiB | 807.00 KiB/s, done.
Resolving deltas: 100% (658/658), done.
Checking connectivity… done.
# Run:
cd /Users/imiell/shutit_resins && ./run.sh
to run.
[follow the instructions to run up your cluster]
$ cd /Users/imiell/shutit_resins && ./run.sh

This will automatically run up an n-node cluster and then finish up.

NOTE: Make sure you have enough resources on your machine to run this!

BTW, if you re-run run.sh, it automatically clears up previous VMs spun up by the script, preventing your machine from grinding to a halt under a pile of old machines.
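To illustrate the idea (this is my own sketch, not ShutIt's actual clean-up code — the VM names and the 'machine' prefix are assumptions based on the naming chosen above), stale VMs from a previous run could be identified by name like so:

```shell
#!/bin/bash
# Hypothetical sketch: pick out VMs from a previous run by their name prefix.
# The sample text mimics 'VBoxManage list vms' output; a real clean-up would
# feed these names to 'VBoxManage unregistervm --delete' or similar.
sample='"machine1_1490805034" {a1b2c3}
"machine2_1490805034" {d4e5f6}
"unrelated_vm" {g7h8i9}'
stale=$(echo "$sample" | awk -F'"' '{print $2}' | grep '^machine')
echo "$stale"
```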

Going deeper

What you can do from there is automate the setup of these nodes to your needs.

For example:

def build(self, shutit):
[... go to end of this function ...]
    # Install apache on machine1 (package and service names here assume a
    # CentOS-like guest; adjust for your distro)
    shutit.login(command='vagrant ssh machine1')
    shutit.login(command='sudo su -')
    shutit.send('yum install -y httpd && service httpd start')
    shutit.logout()
    shutit.logout()
    # Go to machine2 and call machine1's server
    shutit.login(command='vagrant ssh machine2')
    shutit.login(command='sudo su -')
    shutit.send('curl machine1.vagrant.test')
    shutit.logout()
    shutit.logout()

This will set up an apache server on the first machine and curl a request to it from the second.


This is obviously a simple example. I’ve used this framework for these more complex setups, which can be instructive and useful:

Chef server and client

Creates a chef server and client.

Docker Swarm

Creates a 3-node docker swarm

OpenShift Cluster

This one sets up a full OpenShift cluster, setting it up using the standard ansible scripts.

Automation of an etcd migration on OpenShift

This branch of the above code sets up OpenShift using the alternative Chef scripts, and migrates an etcd cluster from one set of nodes to another.

Docker Notary

Setting up of a Docker notary sandbox.

Help Wanted

If you have a need for an environment, or can improve the setup of any of the above, please let me know: @ianmiell

Learn More

My book Docker in Practice:

Get 39% off with the code: 39miell


Migrating an OpenShift etcd Cluster


Following on from my previous post setting up an OpenShift cluster in Vagrant, this post discusses migrating an etcd cluster within a live OpenShift instance to newer servers.

Moving a standalone etcd cluster is relatively straightforward, but when it’s part of an OpenShift cluster — and especially one that’s live and operational — it is a little more involved.

The ordering of actions is important and there are several aspects to consider when planning such a move:

  • Config management preparation
  • Stopping the cluster
  • Creation and distribution of certificates
  • Data migration
  • Update of OpenShift config
  • Update of config management

Here we are using Ansible to provision and maintain the environment.

You can also use Chef to manage your OpenShift cluster.


The code for this is here:


Here’s a video of the upgrade process:


VM Setup

This section of the code sets up the VMs using Vagrant.

Cluster Setup

The next section sets up the OpenShift cluster. It:

  • sets up ssh access across all the hosts
  • writes the ansible hosts config file
  • triggers the ansible playbook

Take a Backup

Take a backup of etcd on all three nodes, just in case.

Stop the Cluster

Generate New Certs

For each new node, run the commands to generate its certs, and copy them to the nodes.

Add etcd Nodes One-By-One

Again for each node:

  • add the new node to the etcd cluster
  • go to the node
  • install etcd
  • extract the certificates
  • update the etcd config
  • restart etcd

NOTE: If you have a lot of data in your cluster, you will want to give the new node ample time to receive the data from the other nodes. In this trivial example, there is little data to transfer. Alternatively, you can copy over the data from one of the original nodes.
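To make the ‘update the etcd config’ step concrete, here is a sketch of the kind of change made on a joining node. The hostnames follow this Vagrant setup’s naming convention and the variable names assume an etcd2-era /etc/etcd/etcd.conf; treat it as illustrative, not the script’s literal output:

```
# /etc/etcd/etcd.conf (illustrative fragment for a new node joining)
ETCD_NAME=etcd4
ETCD_LISTEN_PEER_URLS=https://etcd4.vagrant.test:2380
ETCD_LISTEN_CLIENT_URLS=https://etcd4.vagrant.test:2379
ETCD_INITIAL_CLUSTER="etcd1=https://etcd1.vagrant.test:2380,etcd4=https://etcd4.vagrant.test:2380"
# 'existing' tells etcd to join the running cluster rather than bootstrap a new one
ETCD_INITIAL_CLUSTER_STATE=existing
```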

Drop the Old Members

Now drop the old members from the cluster and remove etcd from those hosts:

Update the Master Config and Bring the OpenShift Cluster Back Up

The /etc/origin/master/master-config.yaml file needs to be updated to reflect the new etcd cluster before bringing the OpenShift cluster back up.
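The relevant section is etcdClientInfo. The URLs below are hypothetical new-node addresses and the cert filenames assume the OpenShift defaults; a sketch only:

```
etcdClientInfo:
  ca: ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
    - https://etcd4.vagrant.test:2379
    - https://etcd5.vagrant.test:2379
    - https://etcd6.vagrant.test:2379
```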

Update Config Manager and Re-Run


A Complete OpenShift Cluster on Vagrant, Step by Step


Following on from my Kubernetes post here, I have automated an OpenShift Origin cluster using the same tools.


Here is a video of the whole process.

It gets (relatively) interesting later on, as a lot of the process is Vagrant starting up and yum installs failing on bad mirrors. Also, Ansible needs to be run several times for it to work (I suspect due to resource limitations, see Gotchas below).


Here is a layout of the VMs. The host uses the landrush plugin to allow transparent DNS lookup from the host, and between boxes.

OpenShift Vagrant Cluster VM Layout


The code is here:

Run Yourself

You will need at least 6.5G spare memory (maybe more) on your host. Even then it may struggle to provision in a timely way.

Do get in touch if you think you can help improve it.

Tech Used

  • Vagrant (Virtualbox)
  • ShutIt
  • Ansible

I am interested in porting to libvirt also. Please get in touch if you want to help.


One of the big problems with running OpenShift in production is the complexity of each environment. You can have test, UAT and prod environments, but sometimes you want to quickly spin up a realistic environment for development or testing.

At that point you’re usually offered an ‘all-in-one’ or single-command setup, which, while very convenient, doesn’t represent the reality of the system you’re running elsewhere.

This is less didactic than the Kubernetes post (the steps to set up take a good while to run even if you’re using ansible…) but still has its uses.

Because this is in vagrant and is automated, it gives you a reliable, fast, and realistic representation of a real live infrastructure. This comes in very handy if you’re trying to determine the memory usage of etcd, the effect of tuning some config variables, or failover scenarios.


Here are some of the things I had to overcome to make this work. They’re fairly instructive:


Learn Kubernetes the Hard Way (the Easy and Cheap Way)



Building on Kelsey Hightower’s fantastic work exposing the internals of Kubernetes by setting up Kubernetes on public cloud providers, I’ve automated all the steps to set up a cluster on your local machine, with a walkthrough mode that takes you through step-by-step. Watch a video here (the interesting stuff happens from about 3 minutes in):

Is it free?

There is no charge, as it runs on your own machine, but by default you need 2G of memory spare on your host.

It helps if you have Virtualbox and Vagrant already installed (works on Mac too!), although the script will try and set this up for you.

How do I run it?

Here are the commands to run it yourself:

sudo pip install shutit
git clone --recursive https://github.com/ianmiell/shutit-k8s-the-hard-way
cd shutit-k8s-the-hard-way

What’s going on?

Here’s a diagram of the setup.

The host runs Vagrant and Virtualbox. Each box in the host box (the big rectangle) represents a virtual machine. There are workers (which run the pods), controllers (which run the kubernetes cluster), a client (which has the kubernetes binaries installed on it), and a load balancer (which represents the entry point to the cluster).

Is it safe?

All work (including the Kubernetes client commands) is done within your locally-provisioned VMs, so it shouldn’t install anything crazy on your machine.

How Does it Work?

The script uses ShutIt to automate the steps to bring up the cluster and walk through the build. Contact me for more info: @ianmiell


The code is here:

Help Wanted

I’m sure this can be improved, both in the functionality demonstrated once the cluster is up and in the descriptions in the notes.

Please help to contribute if you can!


Docker in the Enterprise

Deck from my [Contain] meetup talk available here

Video here


Terraform and Dynamic Environments


Recently I have been playing with Terraform. It’s a lot of fun.

I had a little project that was perfect for it, but ran into a problem. Most examples of Terraform usage assume that your environments are static. So layouts like this are not uncommon:



All well and good, but in my project I needed to create environments on the fly, perhaps with many in existence at the same time. There was no ‘live’ environment, just potentially hundreds of envs in use at once for a short period of time.

I also needed to keep a record of environments created and destroyed.

I researched and asked around, but couldn’t find any best practice for this, so I came up with a pattern that may be useful to others.

Nothing a Shell Script Can’t Handle

In one sentence, this scheme creates a new folder on demand with a unique ID, and destroys it when its time is up.

The original code is elsewhere and somewhat more complex, so I put together this simple example code to illustrate the flow.

Here’s a video of it in action:


In addition to the standard main and vars files in the module, there are two scripts involved:

  • create_dynamic_environment.sh
  • destroy_dynamic_environment.sh


create_dynamic_environment.sh does the following:

  • Create a directory with a unique (well, probably) ID
  • Set up the main.tf file
  • Terraform the environment
  • (Git) add, commit and push the new directory

This script can be triggered when a new environment is required.


#!/bin/bash

# Ensure we are in the right folder
pushd $(dirname ${BASH_SOURCE[0]})

# Create a (probably) unique ID by concatenating two random 
# values (RANDOM is a variable inherent to bash), with the day of year 
# as a suffix.
ID="dynamic_environment_${RANDOM}${RANDOM}_$(date +%j)"

# Create the terraform folder.
mkdir -p ${ID}
pushd ${ID}
cat > main.tf << END
module "dynamicenv" {
  source             = "../modules/dynamicenv"
  dynamic_env_id     = "${ID}"
}
END

# Terraform ahoy!
terraform get
terraform plan
terraform apply
popd

# Record the creation in git and push. Assumes keys set up.
git add ${ID}
git commit -am "${ID} environment added"
git push
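Since the heredoc above expands ${ID}, the generated main.tf for a hypothetical ID of dynamic_environment_1234567_042 would look like this:

```
module "dynamicenv" {
  source             = "../modules/dynamicenv"
  dynamic_env_id     = "dynamic_environment_1234567_042"
}
```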


destroy_dynamic_environment.sh does the following:

  • After 7 days, retire the environment
  • (Git) remove, commit and push the removal

This script can be run regularly in a cron.

In the ‘real’ aws environment I get the EC2 instance to self-destruct after a few hours, but for belt and braces we destroy the environment and remove it from git.


#!/bin/bash

# We need extended glob capabilities.
shopt -s extglob

# Ensure we are in the right folder
pushd $(dirname ${BASH_SOURCE[0]})

# Default to destroying environments over 7 days old.
# If you want to destroy all of them, pass in '-1' as an argument.
DAYS=${1:-7}

# Get today's 'day of year'
TODAY=$(date +%j)

# Remove leading zeroes from the date.
TODAY=${TODAY##+(0)}

# Go through all the environment folders, and terraform destroy,
# git remove and remove the folder.
for dir in $(find dynamic_environment_* -maxdepth 0 -type d)
do
        # Remove the folder prefix to get the day of year.
        dir_day=${dir##*_}

        # Remove any leading zeroes from the day of year.
        dir_day=${dir_day##+(0)}

        # If over 7 days old...
        if [[ $(( ${TODAY} - ${dir_day} )) -gt ${DAYS} ]]
        then
                pushd "${dir}"

                # Destroy the environment.
                terraform destroy -force
                popd

                # Remove from git.
                git rm -rf "${dir}"
                git commit -am "destroyed ${dir}"
                git push

                # Remove left-over backup files.
                rm -rf "${dir}"
        fi
done
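The leading-zero stripping in the script matters because bash arithmetic treats a number with a leading zero as octal, so day-of-year values like 008 or 009 make $(( )) blow up. A quick demonstration:

```shell
#!/bin/bash
# '009' would be parsed as octal inside $(( )) and fail; strip the zeroes
# with the extglob +(0) pattern, or force base 10 with the 10# prefix.
shopt -s extglob
TODAY="009"
STRIPPED=${TODAY##+(0)}
echo "$STRIPPED"            # 9
echo $(( 10#$TODAY + 1 ))   # 10
```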


Bash to Python Converter


Ever start a bash script, then wish you’d started it in python?

Use this Docker image to convert your script.


I routinely use both bash and python to quickly whip up tools for short and long-term uses.

Generally I start with a bash script because it’s so fast to get going, but as time goes on I add features, and then wish I had started it in python so that I could access all the modules and functionality that’s harder to get to in bash.

I found a bash2py tool, which looked good, but came as a zipped source download (not even in a git repo!).

I created a Docker image that runs it, and have used it a couple of times. With a little bit of effort you can quickly convert your bash script to a python one and move ahead.



I’m going to use an artificially simple but realistic bash script to walk through a conversion process.

Let’s say I’ve written this bash script to count the number of lines in a list of files, but want to expand this to do very tricky things based on the output:

if [ $# -lt 1 ]
then
  echo "Usage: $0 file ..."
  exit 1
fi

echo "$0 counts the lines of code" 

for f in $*
do
 l=`wc -l $f | sed 's/^\([0-9]*\).*$/\1/'`
 echo "$f: $l"
done

Here’s a conversion session:

imiell@Ians-Air:/space/git/work/bin$ docker run -ti imiell/bash2py
Unable to find image 'imiell/bash2py:latest' locally
latest: Pulling from imiell/bash2py
357ea8c3d80b: Already exists 
98b473a7fa6a: Pull complete 
a7f8553161b4: Pull complete 
a1dc4858a149: Pull complete 
752a5d408084: Pull complete 
cf7fa7bc103f: Pull complete 
Digest: sha256:110450838816d2838267c394bcc99ae00c99f8162fa85a1daa012cff11c9c6c2
Status: Downloaded newer image for imiell/bash2py:latest
root@89e57c8c3098:/opt/bash2py-3.5# vi a.sh
root@89e57c8c3098:/opt/bash2py-3.5# ./bash2py a.sh 
root@89e57c8c3098:/opt/bash2py-3.5# python a.sh.py 
Usage: a.sh.py file ...
root@89e57c8c3098:/opt/bash2py-3.5# python a.sh.py afile
a.sh.py counts the lines of code
afile: 16


So that’s nice, I now have a working python script I can continue to build on!



Before you get too excited, unfortunately it’s not magically working out which python modules to import and cleverly converting everything from bash to python. However, what’s convenient about this is that you can adjust the script where you care about it, and build from there.

To work through this example, here is the raw conversion:

#! /usr/bin/env python
from __future__ import print_function

import sys,os

class Bash2Py(object):
  __slots__ = ["val"]
  def __init__(self, value=''):
    self.val = value
  def setValue(self, value=None):
    self.val = value
    return value

def GetVariable(name, local=locals()):
  if name in local:
    return local[name]
  if name in globals():
    return globals()[name]
  return None

def Make(name, local=locals()):
  ret = GetVariable(name, local)
  if ret is None:
    ret = Bash2Py(0)
    globals()[name] = ret
  return ret

def Array(value):
  if isinstance(value, list):
    return value
  if isinstance(value, basestring):
    return value.strip().split(' ')
  return [ value ]

class Expand(object):
  def at():
    if (len(sys.argv) < 2):
      return []
    return  sys.argv[1:]
  def star(in_quotes):
    if (in_quotes):
      if (len(sys.argv) < 2):
        return ""
      return " ".join(sys.argv[1:])
    return Expand.at()

  def hash():
    return  len(sys.argv)-1

if (Expand.hash() < 1 ):
    print("Usage: "+__file__+" file ...")

print(__file__+" counts the lines of code")


for Make("f").val in Expand.star(0):
    Make("l").setValue(os.popen("wc -l "+str(f.val)+" | sed \"s/^\\([0-9]*\\).*$/\\1/\"").read().rstrip("\n"))
    print(str(f.val)+": "+str(l.val))


The guts of the code are in the for loop at the bottom.

bash2py does some safe conversion and wrapping of the bash script into some methods such as ‘Make’, ‘Array’ et al that we can get rid of with a little work.

By replacing:

  • Bash2Py(0) with 0
  • Make(“f”).val with f
    • and Make(“l”) with l etc
  • f.val with f
    • and l.val with l etc
< l=Bash2Py(0)
< for Make("f").val in Expand.star(0):
< Make("l").setValue(os.popen("wc -l "+str(f.val)+" | sed \"s/^\\([0-9]*\\).*$/\\1/\"").read().rstrip("\n"))
< print(str(f.val)+": "+str(l.val))
> l=0
> for f in Expand.star(0):
> l = os.popen("wc -l "+str(f)+" | sed \"s/^\\([0-9]*\\).*$/\\1/\"").read().rstrip("\n")
> print(str(f)+": "+str(l))

I simplify that section.

I can remove the now-unused methods to end up with the simpler:

#! /usr/bin/env python

from __future__ import print_function

import sys,os

class Expand(object):
  def at():
    if (len(sys.argv) < 2):
      return []
    return  sys.argv[1:]
  def star(in_quotes):
    if (in_quotes):
      if (len(sys.argv) < 2):
        return ""
      return " ".join(sys.argv[1:])
    return Expand.at()
  def hash():
    return  len(sys.argv)-1

if (Expand.hash() < 1 ):
    print("Usage: "+__file__+" file ...")

print(__file__+" counts the lines of code")


for f in Expand.star(0):
    l = os.popen("wc -l "+str(f)+" | sed \"s/^\\([0-9]*\\).*$/\\1/\"").read().rstrip("\n")
    print(str(f)+": "+str(l))

Note I don’t bother with ‘Expand’ yet, but I can pythonify that later if I choose to.
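For completeness, here is what a more fully pythonified version might look like — my own sketch rather than anything bash2py produces, counting lines natively instead of shelling out to wc and sed:

```python
#!/usr/bin/env python
# Sketch of a fully pythonified version (not bash2py output): count lines
# natively rather than shelling out to 'wc' and 'sed'.
from __future__ import print_function
import sys

def count_lines(path):
    # sum one per line rather than reading the whole file into memory
    with open(path) as fh:
        return sum(1 for _ in fh)

def main(argv=None):
    argv = sys.argv if argv is None else argv
    if len(argv) < 2:
        print("Usage: %s file ..." % argv[0])
        return 1
    print("%s counts the lines of code" % argv[0])
    for f in argv[1:]:
        print("%s: %d" % (f, count_lines(f)))
    return 0
```

Add "if __name__ == '__main__': sys.exit(main())" at the bottom to keep it usable as a command-line script.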

Docker image

Available here.

The Dockerfile is available here.

