Migrating an OpenShift etcd Cluster

Summary

Following on from my previous post setting up an OpenShift cluster in Vagrant, this post discusses migrating an etcd cluster within a live OpenShift instance to newer servers.

Moving a standalone etcd cluster is relatively straightforward, but when it’s part of an OpenShift cluster — and especially one that’s live and operational — it is a little more involved.

The ordering of actions is important and there are several aspects to consider when planning such a move:

  • Config management preparation
  • Stopping the cluster
  • Creation and distribution of certificates
  • Data migration
  • Update of OpenShift config
  • Update of config management

Here we are using Ansible to provision and maintain the environment.

You can also use Chef to manage your OpenShift cluster.


Code

The code for this is here:

Video

Here’s a video of the upgrade process:

Steps

VM Setup

This section of the code sets up the VMs using Vagrant.

Cluster Setup

The next section sets up the OpenShift cluster. It:

  • sets up ssh access across all the hosts
  • writes the ansible hosts config file
  • triggers the ansible playbook
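
The generated Ansible hosts file follows the usual openshift-ansible inventory layout. As a rough sketch (the hostnames and group contents here are illustrative, not taken from the actual repo):

```ini
# Hypothetical openshift-ansible inventory
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin

[masters]
master1.vagrant.local

[etcd]
etcd1.vagrant.local
etcd2.vagrant.local
etcd3.vagrant.local

[nodes]
master1.vagrant.local
node1.vagrant.local
node2.vagrant.local
```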

Take a Backup

Take a backup of etcd on all three nodes, just in case.

Stop the Cluster

Generate New Certs

For each new node, run the commands to generate its certificates, then copy them to that node.
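
In an OpenShift cluster you would normally sign these with the existing cluster CA under /etc/origin/master/; as a generic sketch of the openssl steps (all hostnames and filenames here are illustrative, and the CA is a throwaway one for demonstration):

```shell
#!/bin/bash
set -e

# Illustrative hostname for the new etcd node.
NODE=etcd-new-1.example.com

# A throwaway CA for demonstration; in a real migration, reuse the
# cluster's existing CA rather than creating a new one.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -subj "/CN=etcd-ca" -days 365 -out ca.crt

# Key and signing request for the new node.
openssl genrsa -out "${NODE}.key" 2048
openssl req -new -key "${NODE}.key" -subj "/CN=${NODE}" -out "${NODE}.csr"

# Sign the node's cert with the CA.
openssl x509 -req -in "${NODE}.csr" -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out "${NODE}.crt"

# Check the cert validates against the CA before copying it to the node.
openssl verify -CAfile ca.crt "${NODE}.crt"
```

The resulting ${NODE}.crt, ${NODE}.key and ca.crt are what get copied to the new node.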

Add etcd Nodes One-By-One

Again for each node:

  • add the new node to the etcd cluster
  • go to the node
  • install etcd
  • extract the certificates
  • update the etcd config
  • restart etcd

NOTE: If you have a lot of data in your cluster, you will want to give the new node ample time to receive the data from the other nodes. In this trivial example, there is little data to transfer. Alternatively, you can copy over the data from one of the original nodes.
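
Updating the etcd config on the new node means pointing it at the enlarged cluster and at the new certificates. A sketch of the relevant /etc/etcd/etcd.conf settings (all names, URLs and paths here are illustrative):

```ini
ETCD_NAME=etcd-new-1
ETCD_DATA_DIR=/var/lib/etcd
# Must list every member, old and new, at this point in the migration.
ETCD_INITIAL_CLUSTER=etcd1=https://etcd1.example.com:2380,etcd-new-1=https://etcd-new-1.example.com:2380
# 'existing' because the node is joining a running cluster.
ETCD_INITIAL_CLUSTER_STATE=existing
ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379
ETCD_CERT_FILE=/etc/etcd/server.crt
ETCD_KEY_FILE=/etc/etcd/server.key
ETCD_TRUSTED_CA_FILE=/etc/etcd/ca.crt
ETCD_PEER_CERT_FILE=/etc/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/etcd/peer.key
ETCD_PEER_TRUSTED_CA_FILE=/etc/etcd/ca.crt
```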

Drop the Old Members

Now drop the old members from the cluster and remove etcd from those hosts:

Update the Master Config and Bring the OpenShift Cluster Back Up

The /etc/origin/master/master-config.yaml file needs to be updated to reflect the new etcd cluster before bringing the OpenShift cluster back up.
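
In particular, the urls list in the etcdClientInfo section needs to point at the new members. A sketch (hostnames illustrative):

```yaml
etcdClientInfo:
  ca: master.etcd-ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
    - https://etcd-new-1.example.com:2379
    - https://etcd-new-2.example.com:2379
    - https://etcd-new-3.example.com:2379
```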

Update Config Manager and Re-Run

Learn More

My book Docker in Practice:

Get 39% off with the code: 39miell


A Complete OpenShift Cluster on Vagrant, Step by Step

tl;dr

Following on from my Kubernetes post here, I have automated an OpenShift Origin cluster using the same tools.

Video

Here is a video of the whole process.

It gets (relatively) interesting later on, as a lot of the process is Vagrant starting up and yum installs failing on bad mirrors. Also, Ansible needs to be run several times for it to work (I suspect due to resource limitations, see Gotchas below).

Architecture

Here is a layout of the VMs. The host uses the landrush plugin to allow transparent DNS lookup from the host, and between boxes.

OpenShift Vagrant Cluster VM Layout

Code

The code is here:

Run Yourself

You will need at least 6.5GB of spare memory (maybe more) on your host. Even then it may struggle to provision in a timely way.

Do get in touch if you think you can help improve it.

Tech Used

  • Vagrant (Virtualbox)
  • ShutIt
  • Ansible

I am interested in porting to libvirt also. Please get in touch if you want to help.

Why?

One of the big problems with running OpenShift in production is the complexity of each environment. You can have test, UAT and prod environments, but sometimes you want to quickly spin up a realistic environment for development or testing.

At that point you’re usually offered an ‘all-in-one’ or single-command setup, which, while very convenient, doesn’t represent the reality of the system you’re running elsewhere.

This is less didactic than the Kubernetes post (the steps to set up take a good while to run even if you’re using ansible…) but still has its uses.

Because this is in vagrant and is automated, it gives you a reliable, fast, and realistic representation of a real live infrastructure. This comes in very handy if you’re trying to determine the memory usage of etcd, the effect of tuning some config variables, or failover scenarios.

Gotchas

Here are some of the things I had to overcome to make this work. They’re fairly instructive:


Learn Kubernetes the Hard Way (the Easy and Cheap Way)


tl;dr

Building on Kelsey Hightower’s fantastic work exposing the internals of Kubernetes by setting up Kubernetes on public cloud providers, I’ve automated all the steps to set up a cluster on your local machine, with a walkthrough mode that takes you through step-by-step. Watch a video here (the interesting stuff happens from about 3 minutes in):

It’s free?

There is no charge as it will run on your host, but you need 2G of memory spare on your host by default.

It helps if you have Virtualbox and Vagrant already installed (works on Mac too!), although the script will try to set them up for you.

How do I run it?

Here are the commands to run it yourself:

sudo pip install shutit
git clone --recursive https://github.com/ianmiell/shutit-k8s-the-hard-way
cd shutit-k8s-the-hard-way
./walkthrough.sh

What’s going on?

Here’s a diagram of the setup.

The host runs Vagrant and Virtualbox. Each box inside the host box (the big rectangle) represents a virtual machine. There are workers (which run the pods), controllers (which run the Kubernetes cluster), a client (which has the Kubernetes binaries installed on it), and a load balancer (which represents the entry point to the cluster).

Is it safe?

All work (including the Kubernetes client commands) is done within your locally-provisioned VMs, so it won't install anything crazy on your machine.

How Does it Work?

The script uses ShutIt to automate the steps to bring up the cluster and walk through the build. Contact me for more info: @ianmiell

Code

The code is here:

Help Wanted

I’m sure this can be improved, both in the functionality demonstrated once the cluster is up and in the descriptions in the notes.

Please help to contribute if you can!


Docker in the Enterprise

Deck from my [Contain] meetup talk available here

Video here


Terraform and Dynamic Environments

Introduction

Recently I have been playing with Terraform. It’s a lot of fun.

I had a little project that was perfect for it, but ran into a problem. Most examples of Terraform usage assume that your environments are static. So layouts like this are not uncommon:

terraform_folder/
    modules/
        myproject/main.tf
        myproject/vars.tf
    live/
        main.tf
    stage/
        main.tf
    dev/
        main.tf

Problem

All well and good, but in my project I needed to create environments on the fly, and perhaps many in existence at the same time. There was no ‘live’, just potentially hundreds of envs in use at once for a short period of time.

I also needed to keep a record of environments created and destroyed.

I researched and asked around, but couldn’t find any best practice for this, so came up with a pattern that may be useful to others.

Nothing a Shell Script Can’t Handle

In one sentence: this scheme creates a new folder on demand with a (probably) unique ID, and destroys it when its time is up.

The original code is elsewhere and somewhat more complex, so I put together this simple example code to illustrate the flow.

Here’s a video of it in action:

 

In addition to the standard main and vars files in the module, there are two scripts involved:

  • create_dynamic_environment.sh
  • destroy_dynamic_environments.sh

create_dynamic_environment.sh

  • Create a directory with a unique (well, probably) ID
  • Set up the main.tf file
  • Terraform the environment
  • (Git) add, commit and push the new directory

This script can be triggered when a new environment is required.

#!/bin/bash

# Ensure we are in the right folder
pushd $(dirname ${BASH_SOURCE[0]})

# Create a (probably) unique ID by concatenating two random 
# values (RANDOM is a variable inherent to bash), with the day of year 
# as a suffix.
ID="dynamic_environment_${RANDOM}${RANDOM}_$(date +%j)"

# Create the terraform folder.
mkdir -p ${ID}
pushd ${ID}
cat > main.tf << END
module "dynamicenv" {
  source             = "../modules/dynamicenv"
  dynamic_env_id     = "${ID}"
}
END

# Terraform ahoy!
terraform get
terraform plan
terraform apply

popd

# Record the creation in git and push. Assumes keys set up.
git add ${ID}
git commit -am "${ID} environment added"
git push
popd         

destroy_dynamic_environments.sh

  • After 7 days, retire the environment
  • (Git) remove, commit and push the removal

This script can be run regularly in a cron.
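
For example, a crontab entry along these lines (the repo path is illustrative) would run the cleanup once a day:

```
# m h dom mon dow  command
0 3 * * * cd /path/to/terraform_repo && ./destroy_dynamic_environments.sh
```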

In the ‘real’ AWS environment I get the EC2 instance to self-destruct after a few hours, but for belt and braces we also destroy the environment and remove it from git.

#!/bin/bash

# We need extended glob capabilities.
shopt -s extglob

# Ensure we are in the right folder
pushd $(dirname ${BASH_SOURCE[0]})

# Default to destroying environments over 7 days old.
# If you want to destroy all of them, pass in '-1' as an argument.
DAYS=${1:-7}

# Get today's 'day of year'
TODAY=$(date +%j)

# Remove leading zeroes from the date.
TODAY=${TODAY##+(0)}

# Go through all the environment folders, and terraform destroy,
# git remove and remove the folder.
for dir in $(find dynamic_environment_* -maxdepth 0 -type d)
do
        # Remove the folder prefix.
        dir_day=${dir##*_}

        # Remove any leading zeroes from the day of year.
        dir_day=${dir_day##+(0)}

        # If older than ${DAYS} days...
        if [[ $(( ${TODAY} - ${dir_day} )) -gt ${DAYS} ]]
        then
                pushd "${dir}"

                # Destroy the environment.
                terraform destroy -force
                popd

                # Remove from git.
                git rm -rf "${dir}"
                git commit -am "destroyed ${dir}"
                git push

                # Remove the folder and any left-over untracked files
                # (e.g. terraform state backups).
                rm -rf "${dir}"
        fi
done
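
One subtlety worth calling out in the script above: date +%j zero-pads the day of year, and bash arithmetic treats numbers with leading zeroes as (possibly invalid) octal, which is why the leading zeroes are stripped with extglob. A standalone sketch of that trick:

```shell
#!/bin/bash
# We need extended glob capabilities for the +(0) pattern.
shopt -s extglob

# A zero-padded day-of-year, as 'date +%j' produces early in the year.
dir_day="008"

# Strip leading zeroes; without this, $(( ... )) would reject '008'
# as an invalid octal number.
dir_day=${dir_day##+(0)}

echo "${dir_day}"            # prints: 8
echo $(( 100 - dir_day ))    # prints: 92
```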


Bash to Python Converter

tl;dr

Ever start a bash script, then wish you’d started it in python?

Use this Docker image to convert your script.

Introduction

I routinely use both bash and python to quickly whip up tools for short and long-term uses.

Generally I start with a bash script because it’s so fast to get going, but as time goes on I add features, and then wish I had started it in python so that I could access all the modules and functionality that’s harder to get to in bash.

I found a bash2py tool, which looked good, but came as a zipped source download (not even in a git repo!).

I created a Docker image to convert it, and have used it a couple of times. With a little bit of effort you can quickly convert your bash script to a python one and move ahead.

 

Example

I’m going to use an artificially simple but realistic bash script to walk through a conversion process.

Let’s say I’ve written this bash script to count the number of lines in a list of files, but want to expand this to do very tricky things based on the output:

#!/bin/bash
if [ $# -lt 1 ]
then
  echo "Usage: $0 file ..."
  exit 1
fi

echo "$0 counts the lines of code" 

l=0

for f in $*
do
 l=`wc -l $f | sed 's/^\([0-9]*\).*$/\1/'`
 echo "$f: $l"
done

 

Here’s a conversion session:

imiell@Ians-Air:/space/git/work/bin$ docker run -ti imiell/bash2py
Unable to find image 'imiell/bash2py:latest' locally
latest: Pulling from imiell/bash2py
357ea8c3d80b: Already exists 
98b473a7fa6a: Pull complete 
a7f8553161b4: Pull complete 
a1dc4858a149: Pull complete 
752a5d408084: Pull complete 
cf7fa7bc103f: Pull complete 
Digest: sha256:110450838816d2838267c394bcc99ae00c99f8162fa85a1daa012cff11c9c6c2
Status: Downloaded newer image for imiell/bash2py:latest
root@89e57c8c3098:/opt/bash2py-3.5# vi a.sh
root@89e57c8c3098:/opt/bash2py-3.5# ./bash2py a.sh 
root@89e57c8c3098:/opt/bash2py-3.5# python a.sh.py 
Usage: a.sh.py file ...
root@89e57c8c3098:/opt/bash2py-3.5# python a.sh.py afile
a.sh.py counts the lines of code
afile: 16

 

So that’s nice, I now have a working python script I can continue to build on!

Simplify

 

Before you get too excited, unfortunately it’s not magically working out which python modules to import and cleverly converting everything from bash to python. However, what’s convenient about this is that you can adjust the script where you care about it, and build from there.

To work through this example, here is the raw conversion:

#! /usr/bin/env python
from __future__ import print_function

import sys,os

class Bash2Py(object):
  __slots__ = ["val"]
  def __init__(self, value=''):
    self.val = value
  def setValue(self, value=None):
    self.val = value
    return value

def GetVariable(name, local=locals()):
  if name in local:
    return local[name]
  if name in globals():
    return globals()[name]
  return None

def Make(name, local=locals()):
  ret = GetVariable(name, local)
  if ret is None:
    ret = Bash2Py(0)
    globals()[name] = ret
  return ret

def Array(value):
  if isinstance(value, list):
    return value
  if isinstance(value, basestring):
    return value.strip().split(' ')
  return [ value ]

class Expand(object):
  @staticmethod
  def at():
    if (len(sys.argv) < 2):
      return []
    return  sys.argv[1:]
  @staticmethod
  def star(in_quotes):
    if (in_quotes):
      if (len(sys.argv) < 2):
        return ""
      return " ".join(sys.argv[1:])
    return Expand.at()
  @staticmethod

  def hash():
    return  len(sys.argv)-1

if (Expand.hash() < 1 ):
    print("Usage: "+__file__+" file ...")
    exit(1)

print(__file__+" counts the lines of code")

l=Bash2Py(0)

for Make("f").val in Expand.star(0):
    Make("l").setValue(os.popen("wc -l "+str(f.val)+" | sed \"s/^\\([0-9]*\\).*$/\\1/\"").read().rstrip("\n"))
    print(str(f.val)+": "+str(l.val))

 

The guts of the code is in the for loop at the bottom.

bash2py does some safe conversion and wrapping of the bash script into some methods such as ‘Make’, ‘Array’ et al that we can get rid of with a little work.

By replacing:

  • Bash2Py(0) with 0
  • Make(“f”).val with f
    • and Make(“l”) with l etc
  • f.val with f
    • and l.val with l etc
54,57c27,30
< l=Bash2Py(0)
< for Make("f").val in Expand.star(0):
< Make("l").setValue(os.popen("wc -l "+str(f.val)+" | sed \"s/^\\([0-9]*\\).*$/\\1/\"").read().rstrip("\n"))
< print(str(f.val)+": "+str(l.val))
---
> l=0
> for f in Expand.star(0):
> l = os.popen("wc -l "+str(f)+" | sed \"s/^\\([0-9]*\\).*$/\\1/\"").read().rstrip("\n")
> print(str(f)+": "+str(l))

I simplify that section.

I can remove the now-unused methods to end up with the simpler:

#! /usr/bin/env python

from __future__ import print_function

import sys,os

class Expand(object):
  @staticmethod
  def at():
    if (len(sys.argv) < 2):
      return []
    return  sys.argv[1:]
  @staticmethod
  def star(in_quotes):
    if (in_quotes):
      if (len(sys.argv) < 2):
        return ""
      return " ".join(sys.argv[1:])
    return Expand.at()
  @staticmethod
  def hash():
    return  len(sys.argv)-1

if (Expand.hash() < 1 ):
    print("Usage: "+__file__+" file ...")
    exit(1)

print(__file__+" counts the lines of code")

l=0

for f in Expand.star(0):
    l = os.popen("wc -l "+str(f)+" | sed \"s/^\\([0-9]*\\).*$/\\1/\"").read().rstrip("\n")
    print(str(f)+": "+str(l))

Note I don’t bother with ‘Expand’ yet, but I can pythonify that later if I choose to.

Docker image

Available here.

The Dockerfile is available here.


Hello world Unikernel Walkthrough

 

Introduction

Unikernels are a relatively new concept to most people in IT, but have been around for a while.

They are operating systems running as VMs under a hypervisor, but are:

  • Single-purpose
  • Only use the libraries they need
    • A unikernel might not have networking (for example)
  • Built from a set of available libraries which are dynamically pulled into the image as needed

So rather than starting from a ‘complete’ OS like Linux and then stripping out what’s not needed, only what’s needed to run the application is included.

This brings some benefits:

  • Smaller OS image size
  • Smaller security attack surface
  • Fast bootup
  • Small footprint
  • True isolation from other OSes on the same host

Docker recently bought a unikernel company and promptly used their technology to deliver a very impressive Beta for Mac using xhyve. The end result was a much improved user experience delivered surprisingly quickly.

 

Walkthrough

This walkthrough uses one flavour of unikernel (MirageOS) to demonstrate the building of a unikernel as a Unix binary and as a xen VM image.

The unikernel uses the console library to print out ‘hello world’ four times and exit.

It sets up an Ubuntu Xenial VM and compiles both the Unix binary and the Xen VM image. The VM image is then run with the xl tool, which boots it as though it were a VM running under Xen.

The code is here.

 

Video

Here is a video of the code running on my home server:
