Clustered VM Testing How-To

Recently I’ve been testing clusters of VMs running on my local host.

I thought that there must be a standard way to test multi-node VM setups, but asking around at work and on GitHub yielded no answers.

So I came up with my own solution, which I outline here.


A ShutItFile is a superset of a Dockerfile that allows straightforward scripting of automation tasks.

Here’s an example of a ShutItFile that manipulates two VMs, and tests network connectivity between them.

It creates two machines (machine1 and machine2) and logs into each in turn using the ‘VAGRANT_LOGIN’ directive. On each machine it installs Python and sets up a simple Python HTTP server, which serves the text ‘hi from machine1’ (from machine1) or ‘hi from machine2’ (from machine2).

It then tests that the output matches expectation from both machines using the ‘ASSERT_OUTPUT’ directive.

To demonstrate the ‘testing’ nature of the ShutItFile, a ‘PAUSE_POINT’ directive is included, which drops you into the run with a terminal, and a deliberately wrong ‘ASSERT_OUTPUT’ directive is included to show what happens when a test fails (and the terminal is interactive). This makes debugging a _lot_ easier.


# Set up trivial webserver on machine1
VAGRANT_LOGIN machine1
INSTALL python
# Add file to serve
RUN echo 'hi from machine1' > /root/index.html
RUN cd /root && nohup python -m SimpleHTTPServer 80 &
VAGRANT_LOGOUT

# Set up trivial webserver on machine2
VAGRANT_LOGIN machine2
INSTALL python
RUN echo 'hi from machine2' > /root/index.html
RUN cd /root && nohup python -m SimpleHTTPServer 80 &
VAGRANT_LOGOUT

# Test machine2 from machine1
VAGRANT_LOGIN machine1
INSTALL curl
RUN curl machine2
ASSERT_OUTPUT hi from machine2
VAGRANT_LOGOUT

# Test machine1 from machine2
VAGRANT_LOGIN machine2
INSTALL curl
RUN curl machine1
ASSERT_OUTPUT hi from machine1
VAGRANT_LOGOUT

# Example debug
VAGRANT_LOGIN machine1
PAUSE_POINT 'Have a look around, debug away'
# Trigger a 'failure'
RUN curl machine2
ASSERT_OUTPUT will never happen
VAGRANT_LOGOUT
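Incidentally, the serve-and-check round trip the ShutItFile performs can be sanity-checked in plain Python on the host, without any VMs. This is a standalone sketch (not part of ShutIt): it serves the same one-line page a machine would, fetches it back, and prints the text that ‘ASSERT_OUTPUT’ would be matched against.

```python
import http.server
import threading
import urllib.request

# A one-line page like the one each machine serves in the ShutItFile.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hi from machine1\n")
    def log_message(self, *args):
        pass  # keep the transcript quiet

# Bind to an ephemeral port and serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the page back, as 'RUN curl machine1' does in the ShutItFile.
url = "http://127.0.0.1:%d/" % server.server_address[1]
body = urllib.request.urlopen(url).read().decode()
server.shutdown()
print(body.strip())  # the text ASSERT_OUTPUT would check
```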

To run this ShutItFile (which here we call ‘ShutItFile.sf’), you run it like this:

# Install shutit
pip install shutit
shutit skeleton --shutitfile ShutItFile.sf \
    --name /tmp/shutitfile_build \
    --domain twovm.twovm \
    --delivery bash \
    --pattern vagrant \
    --vagrant_num_machines 2 \
    --vagrant_machine_prefix machine

The code for this example is available here.


There’s a video of the above run here:

Create Your Own

If you want to create your own multinode test:

pip install shutit  #use sudo if needed, --upgrade if upgrading
shutit skeleton

Follow the instructions, choosing ‘shutitfile’ as the pattern, and ‘vagrant’ as the delivery method, eg:

$  shutit skeleton

# Input a name for this module.
# Default: /space/git/shutitfile/examples/vagrant/simple_two_machine/shutit_sabers

# Input a ShutIt pattern.
Default: bash

bash:              a shell script
docker:            a docker image build
vagrant:           a vagrant setup
docker_tutorial:   a docker-based tutorial
shutitfile:        a shutitfile-based project (can be docker, bash, vagrant)


# Input a delivery method from: bash, docker, vagrant.
# Default: bash

docker:      build within a docker image
bash:        run commands directly within bash
vagrant:     build an n-node vagrant cluster

# ShutIt Started... 
# Loading configs...
# Run:
cd /space/git/shutitfile/examples/vagrant/simple_two_machine/shutit_sabers && ./
# to run.
# Or
# cd /space/git/shutitfile/examples/vagrant/simple_two_machine/shutit_sabers && ./ -c
# to run while choosing modules to build.

and follow the commands given (the ‘Run:’ lines above) to run.

Initially you are given empty ShutItFiles. You could start by adding the commands from the example here.

A cheatsheet for the various ShutItFile commands is available here.

Watch me do this here.

Real-world Usage

As an example of real-world usage, this technique is being used to regression test Chef recipes used to provision OpenShift.

The Chef scripts are here, and the regression tests are here.




Posted in Uncategorized | 3 Comments

Easy Shell Automation

Regular readers will be familiar with ShutIt, a framework I work on that allows me to automate all sorts of workflows and tools that I publish on GitHub.

This article demonstrates a new feature that uses this platform to make doing expect-type tasks trivial.

Embedded ShutIt

In response to a request, I recently added a feature which may be useful to others.

All this is available in python scripts if you:

pip install shutit

You can now automate interactions in python scripts. This script just gets the hostname and logs it:

import shutit_standalone
import logging

shutit_obj = shutit_standalone.create_bash_session()
hostname_str = shutit_obj.send_and_get_output('hostname')
shutit_obj.log('Hostname is: ' + hostname_str, level=logging.INFO)

Since ShutIt is a big wrapper/platform built on pexpect, it takes care of setting up the prompt, figuring out when the command is done, and a whole load of other things you never want to worry about with terminals.
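For comparison, here is roughly what a one-shot send_and_get_output amounts to without a persistent session. This is a plain-subprocess sketch of my own; ShutIt's real version keeps a single live pexpect session and so can also drive interactive programs:

```python
import subprocess

def send_and_get_output(command):
    """Rough stand-in for ShutIt's send_and_get_output for a single
    non-interactive command: run it in bash and return trimmed stdout."""
    result = subprocess.run(["bash", "-c", command],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

print("Hostname is: " + send_and_get_output("hostname"))
```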

Log Into Server Example

This example logs into a server, taking the password from user input, and ensures git is installed on it before logging out:

import shutit_standalone
import logging

shutit_obj = shutit_standalone.create_bash_session()
username = shutit_obj.get_input('Input username: ')
server = shutit_obj.get_input('Input server: ')
password = shutit_obj.get_input('Input password: ', ispass=True)
shutit_obj.login('ssh ' + username + '@' + server, password=password)
shutit_obj.install('git')
shutit_obj.logout()

ShutIt takes care of determining what package manager is on the host. If you’re not logged in as root it prompts you for a sudo password before attempting the install.
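You can imagine the package-manager detection as something like the following. This is a crude illustrative sketch; the function name and the list of binaries probed are mine, not ShutIt's actual logic:

```python
import shutil

def detect_package_manager():
    """Probe for well-known package-manager binaries in order of
    preference and return the first one found on the PATH."""
    for pm in ("apt-get", "yum", "dnf", "zypper", "apk", "brew"):
        if shutil.which(pm):
            return pm
    return None

print(detect_package_manager())
```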

Pause Mid-Flight to Look Around

If you want to insert yourself in the middle of the run, you can add a ‘pause_point’, which will hand you back the terminal until you hit CTRL+[, after which it continues:

import shutit_standalone
import logging

shutit_obj = shutit_standalone.create_bash_session()
username = shutit_obj.get_input('Input username: ')
server = shutit_obj.get_input('Input server: ')
password = shutit_obj.get_input('Input password: ', ispass=True)
shutit_obj.login('ssh ' + username + '@' + server, password=password)
shutit_obj.pause_point('Take a look around!')
shutit_obj.logout()

Send Commands Until Specific Output Seen

If you need to wait for something to happen, you can ‘send_until’ a regexp is seen in the output. This trivial example runs a command that waits 20 seconds and then creates a file; the ‘send_until’ call does not complete until the file exists.

import shutit_standalone
import logging

shutit_obj = shutit_standalone.create_bash_session()
username = shutit_obj.get_input('Input username: ')
server = shutit_obj.get_input('Input server: ')
password = shutit_obj.get_input('Input password: ', ispass=True)
shutit_obj.login('ssh ' + username + '@' + server, password=password)
shutit_obj.send('rm -f newfile && sleep 20 && touch newfile &')
shutit_obj.send_until('ls newfile | wc -l', '1')
shutit_obj.logout()
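The polling logic behind ‘send_until’ can be sketched in plain Python. The helper below is illustrative only; ShutIt's real implementation sends the command down the live pexpect session rather than spawning a fresh shell each time:

```python
import re
import subprocess
import time

def send_until(command, regexp, timeout=60, interval=1):
    """Re-run `command` in bash until its output matches `regexp`,
    or give up when `timeout` seconds have passed."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.run(["bash", "-c", command],
                             capture_output=True, text=True).stdout
        if re.search(regexp, out):
            return True
        time.sleep(interval)
    return False

# Background job creates the file after 2 seconds; we poll until it appears.
subprocess.Popen(["bash", "-c",
                  "rm -f /tmp/newfile && sleep 2 && touch /tmp/newfile"])
print("matched" if send_until("ls /tmp/newfile 2>/dev/null | wc -l",
                              r"^\s*1\s*$", timeout=30) else "timed out")
```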


This can do a lot more, but I just want to give a flavour here.

I challenge you to give me a real-world automation task I can’t automate!


My book Docker in Practice:

Get 39% off with the code: 39miell


1-Minute Multi-Node VM Setup


Quickly spin up multiple VMs with useful DNS names on your local machine, and automate complex environments easily.

Here’s a video:


Maintaining Docker at scale, I’m more frequently concerned with clusters of VMs than the containers themselves.

The irony of this is not lost on me.

Frequently I need to spin up clusters of machines. Either this is very slow/unreliable (Enterprise OpenStack implementation) or expensive (Amazon).

The obvious answer to this is to use Vagrant, but managing this can be challenging.

So I present here a very easy way to set up a useful Vagrant cluster. With this framework, you can then automate your ‘real’ environment and play to your heart’s content.

$ pip install shutit
$ shutit skeleton
# Input a name for this module.
# Default: /Users/imiell/shutit_resins
[hit return to take default]
# Input a ShutIt pattern.
Default: bash
bash: a shell script
docker: a docker image build
vagrant: a vagrant setup
docker_tutorial: a docker-based tutorial
shutitfile: a shutitfile-based project
[type in vagrant]
How many machines do you want (default: 3)? 3
[hit return to take default]
What do you want to call the machines (eg superserver) (default: machine)?
[hit return to take default]
Do you want to have open ssh access between machines? (default: yes) yes
Initialized empty Git repository in /Users/imiell/shutit_resins/.git/
Cloning into ‘shutit-library’...
remote: Counting objects: 1322, done.
remote: Compressing objects: 100% (33/33), done.
remote: Total 1322 (delta 20), reused 0 (delta 0), pack-reused 1289
Receiving objects: 100% (1322/1322), 1.12 MiB | 807.00 KiB/s, done.
Resolving deltas: 100% (658/658), done.
Checking connectivity… done.
# Run:
cd /Users/imiell/shutit_resins && ./
to run.
[follow the instructions to run up your cluster]
$ cd /Users/imiell/shutit_resins && ./

This will automatically run up an n-node cluster and then finish up.

NOTE: Make sure you have enough resources on your machine to run this!

BTW, if you re-run it, the script automatically clears up the previous VMs it spun up, to prevent your machine grinding to a halt under old machines.

Going deeper

What you can do from there is automate the setup of these nodes to your needs.

For example:

def build(self, shutit):
[... go to end of this function ...]
    # Install apache on machine1
    shutit.login(command='vagrant ssh machine1')
    shutit.login(command='sudo su -')
    shutit.install('apache2')
    shutit.logout()
    shutit.logout()
    # Go to machine2 and call machine1's server
    shutit.login(command='vagrant ssh machine2')
    shutit.login(command='sudo su -')
    shutit.send('curl machine1.vagrant.test')
    shutit.logout()
    shutit.logout()

This will set up an apache server on machine1 and curl a request to it from machine2.


This is obviously a simple example. I’ve used this for the following more complex setups, which can be instructive and useful:

Chef server and client

Creates a chef server and client.

Docker Swarm

Creates a 3-node Docker Swarm.

OpenShift Cluster

This one sets up a full OpenShift cluster using the standard Ansible scripts.

Automation of an etcd migration on OpenShift

This branch of the above code sets up OpenShift using the alternative Chef scripts, and migrates an etcd cluster from one set of nodes to another.

Docker Notary

Sets up a Docker Notary sandbox.

Help Wanted

If you have a need for an environment, or can improve the setup of any of the above please let me know: @ianmiell



Migrating an OpenShift etcd Cluster


Following on from my previous post setting up an OpenShift cluster in Vagrant, this post discusses migrating an etcd cluster within a live OpenShift instance to newer servers.

Moving a standalone etcd cluster is relatively straightforward, but when it’s part of an OpenShift cluster — and especially one that’s live and operational — it is a little more involved.

The ordering of actions is important and there are several aspects to consider when planning such a move:

  • Config management preparation
  • Stopping the cluster
  • Creation and distribution of certificates
  • Data migration
  • Update of OpenShift config
  • Update of config management

Here we are using Ansible to provision and maintain the environment.

You can also use Chef to manage your OpenShift cluster.


The code for this is here:


Here’s a video of the upgrade process:


VM Setup

This section of the code sets up the VMs using Vagrant.

Cluster Setup

The next section sets up the OpenShift cluster. It:

  • sets up ssh access across all the hosts
  • writes the ansible hosts config file
  • triggers the ansible playbook

Take a Backup

Take a backup of etcd on all three nodes, just in case.

Stop the Cluster

Generate New Certs

For each new node, run the commands to generate its certificates, and copy them to the nodes.

Add etcd Nodes One-By-One

Again for each node:

  • add the new node to the etcd cluster
  • go to the node
  • install etcd
  • extract the certificates
  • update the etcd config
  • restart etcd

NOTE: If you have a lot of data in your cluster, you will want to give the new node ample time to receive the data from the other nodes. In this trivial example, there is little data to transfer. Alternatively, you can copy over the data from one of the original nodes.

Drop the Old Members

Now drop the old members from the cluster and remove etcd from those hosts:

Update the Master Config and Bring the OpenShift Cluster Back Up

The /etc/origin/master/master-config.yaml file needs to be updated to reflect the new etcd cluster before bringing the OpenShift cluster back up.

Update Config Manager and Re-Run



A Complete OpenShift Cluster on Vagrant, Step by Step


Following on from my Kubernetes post here, I have automated an OpenShift Origin cluster using the same tools.


Here is a video of the whole process.

It gets (relatively) interesting later on, as a lot of the process is Vagrant starting up and yum installs failing on bad mirrors. Also, Ansible needs to be run several times for it to work (I suspect due to resource limitations, see Gotchas below).


Here is a layout of the VMs. The host uses the landrush plugin to allow transparent DNS lookup from the host, and between boxes.

OpenShift Vagrant Cluster VM Layout


The code is here:

Run Yourself

You will need at least 6.5G of spare memory (maybe more) on your host. Even then it may struggle to provision in a timely way.

Do get in touch if you think you can help improve it.

Tech Used

  • Vagrant (Virtualbox)
  • ShutIt
  • Ansible

I am interested in porting to libvirt also. Please get in touch if you want to help.


One of the big problems with running OpenShift in production is the complexity of each environment. You can have test, UAT and prod environments, but sometimes you want to quickly spin up a realistic environment for development or testing.

At that point you’re usually offered an ‘all-in-one’ or single-command setup, which, while very convenient, doesn’t represent the reality of the system you’re running elsewhere.

This is less didactic than the Kubernetes post (the setup steps take a good while to run, even using Ansible) but still has its uses.

Because this is in vagrant and is automated, it gives you a reliable, fast, and realistic representation of a real live infrastructure. This comes in very handy if you’re trying to determine the memory usage of etcd, the effect of tuning some config variables, or failover scenarios.


Here are some of the things I had to overcome to make this work. They’re fairly instructive:



Learn Kubernetes the Hard Way (the Easy and Cheap Way)



Building on Kelsey Hightower’s fantastic work exposing the internals of Kubernetes by setting up Kubernetes on public cloud providers, I’ve automated all the steps to set up a cluster on your local machine, with a walkthrough mode that takes you through step-by-step. Watch a video here (the interesting stuff happens from about 3 minutes in):

It’s free?

There is no charge, as it runs on your own host, but you do need 2G of memory spare by default.

It helps if you have Virtualbox and Vagrant already installed (works on Mac too!), although the script will try and set this up for you.

How do I run it?

Here are the commands to run it yourself:

sudo pip install shutit
git clone --recursive
cd shutit-k8s-the-hard-way

What’s going on?

Here’s a diagram of the setup.

The host runs Vagrant and Virtualbox. Each box inside the host box (the big rectangle) represents a virtual machine. There are workers (which run the pods), controllers (which run the Kubernetes cluster), a client (which has the Kubernetes binaries installed on it), and a load balancer (which represents the entry point to the cluster).

Is it safe?

All work (including the Kubernetes client commands) is done within your locally-provisioned VMs, so it won’t install crazy things on your machine or anything.

How Does it Work?

The script uses ShutIt to automate the steps to bring up the cluster and walk through the build. Contact me for more info: @ianmiell


The code is here:

Help Wanted

I’m sure this can be improved, both in the functionality explored once the cluster is up and in the descriptions in the notes.

Please help to contribute if you can!



Docker in the Enterprise

Deck from my [Contain] meetup talk available here

Video here
