Blue-Green Deployments using Containers

This post reproduces the demo run at the November Docker Meetup in Sydney. It emulates a blue-green deployment using Docker containers. Each independent infrastructure contains an Elastic Load Balancer and an EC2 instance. Each EC2 instance runs a microservice, one with Tomcat 7 (emulating the existing infrastructure) and the other with Tomcat 8 (emulating the new infrastructure). We will deploy both infrastructures using the Manageacloud framework.

This post is a summary of commands; if you want to learn more about how it works, you can get started with the quickstart guide.

Installation Summary

pip install awscli
pip install mac
aws configure
mac login
mac provider credential amazon <access key> <secret_access_key>

Tomcat 7

mac -s infrastructure macfile -p TOMCAT_VERSION=7-jre8

Tomcat 8

mac -s infrastructure macfile -p TOMCAT_VERSION=8-jre8

Accessing the Infrastructure

You can access the application running the microservice by using the EC2 public address or the DNSName provided by the load balancer. You can retrieve the DNSName by executing:

For Tomcat7:

mac resource get_stdout docker_tomcat 7-jre8 load_balancer_01

For Tomcat8:

mac resource get_stdout docker_tomcat 8-jre8 load_balancer_01


Translating the AWS Console to the Command Line

AWS provides a clean and efficient method for using the available services through a browser interface. This is fantastic when completing administration tasks by hand, but doesn't allow for any form of automation or scripting. The Command Line suite is made available for this purpose, but without the graphical interface it can be confusing to follow and difficult to translate. In this article, we look at taking some common actions from the graphical interface and translating them to the command line, and then at how Manageacloud can help ensure these commands produce the expected results.

Amazon Web Services provides a large number of services covering a vast array of functionality many websites require, with a smooth and usable browser-based interface. This interface is clean, but doesn't provide the automation that would be required to deliver these websites at any kind of scale. A command line utility is available for this purpose, installable using pip within a standard Python environment. The command installed is simply aws and requires some configuration; just run aws configure to set it up. You will need your AWS credentials for command line access, available through the Identity & Access Management section of the console.

Automation has become a prominent topic in recent years, especially with the advent of cheaper hardware and volatile infrastructure. Maintaining large sets of infrastructure that isn't completely known would be impossible without tooling to provide metadata storage and association. Using a command line tool has been standard practice, though browser interfaces still provide a source-of-truth feel when displaying configuration and a general overview. The AWS Console is fantastic for this, displaying deployed infrastructure by location, breaking it down by type and showing any associated metadata.

To begin with, quite possibly the most common service used is EC2. This is essentially a service providing virtual machines; an operating system and a hardware profile are all that's needed to create one. Of course, this basic VM wouldn't actually be doing anything once created. Within the console, creating one is trivial: just visit the EC2 link within the Compute section, then click Launch Instance and follow the prompts. Once automation is taken into account, however, this process can be further simplified and scripted using the command line, without the several steps of button pushing. A quick example of a command to create an instance is as follows.

aws ec2 run-instances --image-id ami-69631053 --count 1 --instance-type t2.micro --security-groups default --region ap-southeast-2

Using the aws command directly, the service is specified followed by the action or subcommand. In this case, we're working with EC2 and we wish to run an instance. More options are added, including the AMI and security group, along with the instance type and region. A default region is also set via the aws configure command run at the beginning, but it can be overridden on a per-command basis.

Once this command has run, details about the instance will be displayed, including the instance ID. This ID can then be used in further commands, such as attaching the instance to a load balancer or destroying it. Within the Manageacloud environment, this ID is kept as part of the loaded infrastructure, allowing scripting and association without any manual interaction.
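Capturing that ID in a script can be sketched as follows. The JSON below is an abbreviated, illustrative response (the instance ID is made up); the real call is shown in a comment, and the CLI's --query option or jq are cleaner tools when available:

```shell
# A minimal sketch of capturing the instance ID for later commands.
# "response" holds an abbreviated, illustrative JSON reply, not real output.
response='{
    "Instances": [
        {
            "InstanceId": "i-0abc1234def567890",
            "InstanceType": "t2.micro"
        }
    ]
}'

# In a real script the response would come from the CLI, e.g.:
#   response=$(aws ec2 run-instances --image-id ami-69631053 --count 1 \
#       --instance-type t2.micro --security-groups default --region ap-southeast-2)

# Extract the ID with a small sed expression
instance_id=$(printf '%s' "$response" | sed -n 's/.*"InstanceId": "\([^"]*\)".*/\1/p')
echo "$instance_id"
```

The captured instance ID can then be fed straight into stop-instances or terminate-instances.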

When the instance is ready to be removed, it can be stopped with the following command.

aws ec2 stop-instances --instance-ids [instance-id] --region ap-southeast-2

In this command, [instance-id] is the identifier returned by the first command. The subcommand used here is stop-instances, which simply powers down the instance. A similar command is terminate-instances, which accomplishes the same task but also destroys the instance once powered down. This can be extremely useful for dynamic infrastructure deployments, but it is unnecessary for any instance that will be useful in the future. One good use case for stop is development environments: when working on a project, an instance can be started, worked on, and then stopped, ready for the next time.

These are just three of the EC2 subcommands available, with many more for a wide variety of uses. To see a list of the available commands, just run aws ec2 help and scroll to Available Commands. For a list of the services available within the aws application, run aws help and scroll to Available Services. Each service available within the console has a counterpart within the command line interface, and each uses its abbreviated form to keep commands short: Simple Storage Service is S3, Elastic Load Balancer is ELB, and so on. For each service, to see subcommands and other options, just run aws [service] help and scroll to the Available Commands header.

Furthering the topic of automation, Manageacloud makes re-use of commands and their scripting extremely simple. Check out the Quick start guide to see how infrastructure and instances can be defined and built, then automated for full flexibility and control. 

Zero Downtime using Blue-Green Deployments

This article briefly explains some concepts required to achieve zero downtime using blue-green deployments.

Blue-Green Deployments in a Nutshell

Let's assume that you have an application in AWS with the following architecture: traffic makes a DNS request, then reaches the load balancer (an Elastic Load Balancer in AWS). The load balancer is configured with an autoscaling group, which creates new EC2 instances that connect to the database (RDS on Amazon Web Services). The current application is version 1.0.

The following steps must be completed to deploy version 2.0 using blue-green deployments:

1 - Create a Brand New Production Ready Infrastructure

The new infrastructure won't be active yet. This process should be completely automated and should take just a few minutes. The new infrastructure connects to the existing database.

2 - Test the new infrastructure

Make sure that it is working properly. You can automate this step, which will be discussed in another post. If the new infrastructure does not work, you can destroy it, fix the problem, and create another system from scratch. This whole process is completely transparent to your users.

3 - Activate the new infrastructure

Depending on how things are organised you can, for example, activate it by replacing the servers in the load balancer or by updating a DNS entry. This example uses DNS, but you should evaluate what option is best for you.

4 - Destroy the old infrastructure

Once there is no more active traffic in the old infrastructure, no tasks are pending (e.g. the workers have finished the queues) and we are confident that the new infrastructure is working fine, the old infrastructure is destroyed.
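The four steps can be sketched as a small script. The infrastructure names and functions here are illustrative stand-ins (a real pipeline would call mac and your DNS provider), with a shell variable playing the role of the DNS record:

```shell
# Illustrative blue-green switch: shell functions stand in for the real
# infrastructure and DNS operations.
old_infra="blue-v1.0"
new_infra="green-v2.0"
dns_target="$old_infra"   # traffic currently reaches the old stack

create_infrastructure() { echo "creating $1"; }   # step 1, e.g. a `mac infrastructure` call
smoke_test()            { echo "testing $1"; }    # step 2: destroy and retry on failure
activate()              { dns_target="$1"; }      # step 3: the DNS (or load balancer) switch
destroy()               { echo "destroying $1"; } # step 4: only once traffic has drained

create_infrastructure "$new_infra"
smoke_test "$new_infra"
activate "$new_infra"
destroy "$old_infra"

echo "traffic now goes to: $dns_target"
```

At no point does the DNS record point at a half-deployed stack, which is what gives the zero-downtime property.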

Blue-green deployments are easier when the system is designed using the Immutable Infrastructure architecture pattern.


Immutable Architecture Pattern in a Nutshell

1 - The immutable infrastructure pattern divides the infrastructure into two areas: data and everything else. The "everything else" components are replaced at every deployment, rather than being updated in place.

In our previous example we had several infrastructure elements in place. The EC2 instances and the load balancer do not hold state, which means they stay the same over time. RDS, the database, holds state, as it stores the application's information.

2 - You should never change any part of the production once it is deployed. If you need a new change, deploy a new system.

3 - It is best to automate everything at the lowest level possible.

Implement blue-green deployments within days

Manageacloud can help you to automate your application within days. Contact us for more information.

Azure: Create and Provision Windows Servers Automatically

The goal of this article is to explain how to create and configure a Windows server automatically using Azure and the cross platform command line interface (xplat-cli).

Getting Started with Windows Azure xplat-cli

1) Install the Windows Azure command line interface

npm install -g azure-cli

Do you need a Linux terminal?

Manageacloud provides free Linux terminals ready to use. We also have terminals with Azure xplat-cli pre-installed available.

Debian and nodejs

Unfortunately, not all nodejs applications work on Debian-based distributions out of the box. If you have problems with the Azure xplat-cli, try the following command:

ln -s /usr/bin/nodejs /usr/bin/node

2) Activate the Service Manager Mode

azure config mode asm

3) Log into your Azure account

azure login

4) Create the network for the Windows VM

azure network vnet create --location "East US" testnet

5) Create the Windows VM

azure vm create --vm-name macdemotest --location "East US" --virtual-network-name testnet --rdp 3389  macdemotest   ad072bd3082149369c449ba5832401ae__Windows-Server-Remote-Desktop-Session-Host-on-Windows-Server-2012-R2-20150828-0350 username MySafePassword01!

Accessing the Windows Server

You can confirm that the server is ready to use by executing the following command:

azure vm list

Once the server is ready to use, you can access the desktop using RDP. For Linux users, Remmina is recommended.

6) Provision the Windows VM

As a proof of concept for provisioning the Windows server, we are going to create the folder C:\HelloWorld. The source code of the script is hosted on GitHub.

azure vm extension set macdemotest CustomScriptExtension Microsoft.Compute 1.4 -i '{"fileUris":[""], "commandToExecute": "powershell -ExecutionPolicy Unrestricted -file createFolder.ps1" }'

Automating server creation and provisioning

The goal is to create the Windows server and provision it without any user input. To do that, we will create the blueprint of the infrastructure using a macfile.


A macfile contains the blueprint of the whole application, including the rules required to create, destroy and maintain environments throughout their lifecycle.

1 - Install Manageacloud Command Line Interface

curl -sSL | bash

Do you want all the software pre-installed ?

To make it easier to try this post, we are offering free Linux terminals with the Azure and Manageacloud CLIs pre-installed.

2 - Register a user and log in

mac login

3 - Save azure.macfile, the file that defines how to create and provision the Windows server

curl > azure.macfile

4 - Execute the macfile to create and provision the Windows server

mac infrastructure macfile azure.macfile

5 - Destroy the infrastructure

mac infrastructure destroy azure_demo 1.1

Managing infrastructures

In this example the infrastructure consists of two resources: the network and the Windows server. Once the macfile is defined, both resources are associated, and you can create and destroy the infrastructure easily.

Automation Services

Manageacloud can help you to automate your application within days. Contact us for more information.

Dockerising Puppet

Learn how to use Puppet to manage Docker containers. This post contains complementary technical details to the talk given on the 23rd of April at Puppet Camp in Sydney.

Manageacloud is a company that specialises in multi-cloud orchestration. Please contact us if you want to know more.



The goal is to manage the configuration of Docker containers using existing puppet modules and Puppet Enterprise. We will use the example of a Wordpress application and two different approaches:

  • Fat containers: treating the container as a virtual machine
  • Microservices: one process per container, as originally recommended by Docker


Docker Workflow



1 - Dockerfile

The Dockerfile is the "source code" of the container image:

  • It uses imperative programming, which means we need to specify every command, tailored to the target distribution, to achieve the desired state.
  • It is very similar to bash; if you know bash, you know how to use a Dockerfile.
  • In large and complex architectures, the goal of the Dockerfile is to hook in a configuration management system like Puppet to install the required software and configure the container.

For example, this is a Dockerfile that will create a container image with Apache2 installed in Ubuntu:

FROM ubuntu
MAINTAINER Ruben Rubio Rey <>
RUN apt-get update
RUN apt-get install -y apache2


2 - Container Image

The container image is generated from the Dockerfile using docker build:

docker build -t <image_name> <directory_path_to_Dockerfile>


3 - Registry

An analogy for the Registry is that it works like a git repository: it allows you to push and pull container images, and container images can have different versions.

The Registry is the central point to distribute Docker containers. It does not matter if you use Kubernetes, CoreOS Fleet, Docker Swarm, Mesos or you are just orchestrating in a Docker host.

For example, as the DevOps person within your organization, you may decide that the developers (who already develop under Linux) will use containers instead of virtual machines for the development environment. The DevOps person is responsible for creating the Dockerfile, building the container image and pushing it to the registry. All developers within the organization can then pull the latest version of the development environment from the registry and use it.
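That push/pull workflow revolves around registry-qualified image names. The sketch below only assembles such a name; the registry address and image name are illustrative, and the actual docker commands (which need a running daemon) are shown as comments:

```shell
# A registry-qualified image name is <registry_host:port>/<image>:<version>
registry="localhost:5000"   # illustrative: an insecure local registry
image="devenv"              # illustrative image name
version="1.2"

full_name="${registry}/${image}:${version}"
echo "$full_name"

# Real workflow (requires a Docker daemon and a reachable registry):
#   docker tag  devenv "$full_name"
#   docker push "$full_name"
#   docker pull "$full_name"    # what every developer runs to get the environment
```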


4 - Development Environment

Docker containers can be used in a development environment. You can make developers more comfortable with the transition to containers by using the controversial "Fat Containers" approach.


5 - Production Environment

You can orchestrate Docker containers in production for two different purposes:

  • Docker Host: Using containers as a way to distribute the configuration. This post focuses on using containers in Docker Hosts.
  • Cluster Management: Mesos, Kubernetes, Docker Swarm and CoreOS Fleet are used to manage containerised applications in clustered environments. They aim to create a layer on top of the different available virtual machines, allowing you to manage all resources as one unified whole. These technologies are very likely to evolve significantly over the next 12 months.


Fat Containers vs Microservices

When you are creating containers, there are two different approaches:

  • Microservices: running one single process per container.
  • Fat containers: running many processes and services in a container. In fact, you are treating the container as a virtual machine.

The problem with the microservices approach is that Linux is not really designed for microservices. If you have several processes running in a container, and one of those processes is detached from its parent, it is the responsibility of the init process to reap its resources. If those resources are not reaped, the process becomes a zombie.

Some Linux applications are not designed for single-process systems either:

  • Many Linux applications are designed to have a crontab daemon run periodic tasks.
  • Many Linux applications write vital information directly to syslog. If the syslog daemon is not running, you might never notice those messages.

In order to run multiple processes in a container, you need an init process or similar. There are base images with init processes built in, for example for Ubuntu and Debian.

What should you use? My advice is to be pragmatic; no one size fits all. Your goal is to solve business problems without creating technical debt. If fat containers better suit your business needs, use them. If microservices fit better, use those instead. Ideally, you should know how to use both, and analyse the case in point to decide what is best for your company. There are no absolute technical reasons to use one over the other.


Managing Docker Containers with Puppet

When we use Puppet (or any other configuration management system) to manage Docker containers, there are two sets of tasks: container creation and container orchestration.


Container Creation

  1. The Dockerfile installs the Puppet client and invokes the Puppet Master to retrieve the container's configuration
  2. The new image is pushed to the registry


Container Orchestration

  1. The Docker host's puppet agent invokes the Puppet Master to get the configuration
  2. The puppet agent identifies a set of containers that must be pulled from the Docker registry
  3. The puppet agent pulls, configures and starts the Docker containers on the Docker host


Puppet Master Configuration

For this configuration, we are assuming that the Puppet Master is running in a private network where all the clients are trusted. This allows us to use the configuration setting autosign = true in the master's puppet.conf.
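For reference, the relevant fragment of the master's puppet.conf would look like this (a sketch; only the autosign line comes from this post, and it is safe only on a trusted private network):

```ini
# puppet.conf on the Puppet Master (illustrative fragment)
[master]
    # Sign client certificate requests automatically.
    # Only acceptable when every client on the network is trusted.
    autosign = true
```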


Docker Registry

The Docker registry is like a "git repository" for containers: you can push and pull containers, and containers can have a version number. You can use a hosted provider for the Docker registry or install one yourself. For this example we will use the garethr/docker module from the Puppet Forge to create our docker-registry puppet manifest:

class docker-registry {

    include 'docker'

    docker::run { 'local-registry':

        # Name of the container in Docker Hub
        image           => 'registry',

        # We are mapping a port from the Docker host to the container.
        # If you don't do that, you cannot access
        # the services available in the container
        ports           => ['5000:5000'],

        # We send the configuration parameters required to
        # configure an insecure version of a local registry
        env             => ['SETTINGS_FLAVOR=dev', 'STORAGE_PATH=/var/docker-registry/local-registry'],

        # Containers are stateless. If you modify the filesystem,
        # you are creating a new container.
        # If we want to push containers, we need a
        # persistent layer somewhere.
        # In this case, in order to have a persistent layer,
        # we map a folder on the host to a folder in the container
        volumes         => ['/var/docker-registry:/var/docker-registry'],
    }
}


Please note that this installs an insecure Docker registry for testing purposes only.


Fat Containers Approach

For this example, I am using a fat container, as I am considering the development environment for the developers within my organization. Fat containers work much like virtual machines, so the learning curve will be close to zero. If the developers are already using Linux, using containers will remove the overhead of the hypervisor and their computers will immediately be faster.

This fat container will contain the following services:

  • Provided by the base image:
    • init
    • syslog
    • crontab
    • ssh
  • Provided by Puppet:
    • mysql
    • apache2 (along with Wordpress codebase)

The Dockerfile creates the Wordpress fat container. This is its content:

FROM phusion/baseimage
MAINTAINER Ruben Rubio Rey  ""

# Activate AU mirrors
COPY files/ /etc/apt/sources.list

# Install puppet client using Puppet Enterprise
RUN curl -k | bash

# Configure puppet client (Just removed the last line for the "certname")
COPY files/puppet.conf /etc/puppetlabs/puppet/puppet.conf

# Apply puppet changes. Note certname, we are using "wordpress-image-"
# and three random characters.
#  - "wordpress-image-" allows Puppet Enterprise
# to identify which classes must be applied
#  - The three random characters are used to
# avoid conflict with the node certificates
RUN puppet agent --debug --verbose --no-daemonize --onetime --certname wordpress-image-`date +%s | sha256sum | head -c 3; echo `

# Enable SSH - As this is meant to be a development environment,
# SSH might be useful to the developer
# This is needed for phusion/baseimage only
RUN rm -f /etc/service/sshd/down

# Change root password - even if we use key authentication
# knowing the root's password is useful for developers
RUN echo "root:mypassword" | chpasswd

# We enable the services that puppet is installing
COPY files/init /etc/my_init.d/10_init_services
RUN chmod +x /etc/my_init.d/10_init_services

When we build the Docker container, it requests the configuration from the Puppet Master using the certname "wordpress-image-XXX", where XXX are random characters.
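The certname expression from the Dockerfile can be tried on its own. This sketch reproduces it outside the build: a fixed prefix that Puppet Enterprise matches on, plus three pseudo-random characters (derived from hashing the current timestamp) to avoid certificate-name collisions between builds:

```shell
# Three pseudo-random characters from hashing the current timestamp
suffix=$(date +%s | sha256sum | head -c 3)

# Fixed prefix + random suffix, as passed to `puppet agent --certname`
certname="wordpress-image-${suffix}"
echo "$certname"
```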

The Puppet Master returns the following manifest:

class wordpress-all-in-one {

  # Problems using official mysql from Puppet Forge
  # If you try to install mysql using package {"mysql": ensure => installed }
  # it crashes. It tries to do something with the init process
  # and this container does not have a
  # fully featured init process. "mysql-noinit" installs
  # mysql without any init dependency.
  # note that although we cannot use mysql Puppet Forge
  # module to install the software, we can use
  # the types to create database, create user
  # and grant permissions
  include "mysql-noinit"

  # Fix unsatisfied requirements in the Wordpress class.
  # The hunner/wordpress module assumes that
  # wget is installed in the system. However,
  # containers by default have minimal software
  # installed.
  package {"wget": ensure => latest}

  # hunner/wordpress,
  # removing any task related to
  # the database (it will crash when
  # checking whether the mysql package is installed)
  class { 'wordpress':
    install_dir    => '/var/www/wordpress',
    db_user        => 'wp_user',
    db_password    => 'password',
    create_db      => false,
    create_db_user => false,
  }

  # Ad-hoc apache configuration:
  # installs apache, php and adds the
  # virtual server wordpress.conf
  include "apache-wordpress"
}

Build the container image:

docker build -t puppet_wordpress_all_in_one /path/to/Dockerfile_folder/

Push the image to the registry

docker tag puppet_wordpress_all_in_one
docker push

Orchestrate the container

To orchestrate the fat container in a Docker host:

class container-wordpress-all-in-one {

    class { 'docker':
        extra_parameters => ['--insecure-registry'],
    }

    docker::run { 'wordpress-all-in-one':

        # image is fetched from the Registry
        image => '',

        # The fat container maps port 80 on the Docker host to
        # the container's port 80
        ports => ['80:80'],
    }
}


Microservices Approach

Now we are going to reuse as much of the existing code as possible, following the microservices architecture approach. We will have two containers: a DB container running MySQL and a WEB container running Apache2.


1 - MySQL (DB) Microservice Container

As usual, we use a Dockerfile to build the Docker image. The Dockerfiles are very similar; I will highlight the changes.

# This time we are using the Docker Official image Ubuntu (no init process)
FROM ubuntu
MAINTAINER Ruben Rubio Rey ""

# Activate AU mirrors
COPY files/ /etc/apt/sources.list

# This base image does not have curl installed
RUN apt-get update && apt-get install -y curl

# Install puppet client
RUN curl -k | bash

# Configure puppet client
COPY files/puppet.conf /etc/puppetlabs/puppet/puppet.conf

# Apply puppet changes. We change the certname
# so Puppet Master knows what configuration to retrieve.
RUN puppet agent --debug --verbose --no-daemonize --onetime --certname ms-mysql-image-`date +%s | sha256sum | head -c 3; echo `

# Expose MySQL to the Docker network.
# We are notifying Docker that this container
# has a service that other containers might need
EXPOSE 3306

The class returned by the Puppet Master is wordpress-mysql-ms. You will notice that this class is almost identical to the fat container's, but everything that is not related to the database is commented out.

class wordpress-mysql-ms {

    # Install MySQL
    include "mysql-noinit"

    # Unsatisfied requirements in wordpress class
    # package {"wget": ensure => latest}

    # Puppet forge wordpress class, removing mysql
    # class { 'wordpress':
    #   install_dir => '/var/www/wordpress',
    #   db_user => 'wp_user',
    #   db_password => 'password',

    # Apache configuration not needed
    # include "apache-wordpress"
}

Build the container

docker build -t puppet_ms_mysql .

Push the container to the registry

docker tag puppet_ms_mysql
sudo docker push


2 - Apache (WEB) Microservice Container

Once more, we use the Dockerfile to build the image. The file is exactly the same as the MySQL one, except for a few lines, which are highlighted.

FROM ubuntu
MAINTAINER Ruben Rubio Rey ""

# Activate AU mirrors
COPY files/ /etc/apt/sources.list

# Install CURL
RUN apt-get update && apt-get install -y curl

# Install puppet client
RUN curl -k | bash

# Configure puppet client
COPY files/puppet.conf /etc/puppetlabs/puppet/puppet.conf

# Apply puppet changes
RUN puppet agent --debug --verbose --no-daemonize --onetime --certname ms-apache-image-`date +%s | sha256sum | head -c 3; echo `

# Apply patch to link container.
# We have to tell Wordpress where
# mysql service is running,
# using a system environment variable
# (Explanation in the next section)

# If we are using Puppet for microservices
# we should update the Wordpress module
# to set this environment variable.
# In this case, I am exposing the changes so
# it is easier to see what is changing.

RUN apt-get install patch -y
COPY files/wp-config.patch /var/www/wordpress/wp-config.patch

RUN cd /var/www/wordpress && patch wp-config.php < wp-config.patch

# We configure PHP to read system environment variables
COPY files/90-env.ini /etc/php5/apache2/conf.d/90-env.ini

The class returned by the Puppet Master is wordpress-apache-ms. You will notice that it is very similar to wordpress-mysql-ms and to wordpress-all-in-one, used by the fat container. The difference is that everything related to mysql is commented out and everything related to wordpress and apache is executed.

class wordpress-apache-ms {

    # MySQL won't be installed here
    # include "mysql-noinit"

    # Unsatisfied requirements in wordpress class
    package {"wget": ensure => latest}

    # Puppet forge wordpress class, removing mysql
    class { 'wordpress':
        install_dir => '/var/www/wordpress',
        db_user => 'wp_user',
        db_password => 'password',
        create_db => false,
        create_db_user => false
    }

    # Ad-hoc apache configuration
    include "apache-wordpress"
}



3 - Orchestrating Web and DB Microservice

The Puppet class that orchestrates both microservices is called container-wordpress-ms:

class container-wordpress-ms {

    # Make sure that Docker is installed
    # and that it can get images from our insecure registry
    class { 'docker':
        extra_parameters => ['--insecure-registry'],
    }

    # Container DB will run MySQL
    docker::run { 'db':
        # The image is taken from the registry
        image => '',
        command => '/usr/sbin/mysqld --bind-address=',
        use_name => true,
    }

    # Container WEB will run Apache
    docker::run { 'web':
        # The image is taken from the Registry
        image => '',
        command => '/usr/sbin/apache2ctl -D FOREGROUND',
        # We map a port between the Docker host and the Apache container.
        ports => ['80:80'],
        # We link the WEB container to the DB container. This allows WEB to access the
        # services exposed by the DB container (in this case 3306)
        links => ['db:db'],
        use_name => true,
        # We need the DB container up and running before starting WEB.
        depends => ['db'],
    }
}


APPENDIX I: Linking containers

When we link containers in the microservices approach, we are performing the following tasks.


Starting "db" container:

This starts puppet_ms_mysql as a container named db. Please note that puppet_ms_mysql exposes port 3306, which notifies Docker that this container has a service that might be useful for other containers.

docker run --name db -d puppet_ms_mysql /usr/sbin/mysqld --bind-address=


Starting "web" container

Now we want to start the container puppet_ms_apache, named web.

If we link the containers and execute the command env, the following environment variables are created in the web container:

docker run --name web -p 1800:80 --link db:db puppet_ms_apache env

These variables indicate where the mysql database is. Thus, the application should use the environment variable DB_PORT_3306_TCP_ADDR to connect to the database.

  • DB is the name of the container we are linking to
  • 3306 is the port exposed in the Dockerfile of the db container
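These link-style variables can be sketched as follows. The values here are illustrative (inside a real linked container Docker sets them itself from the db container's IP and exposed port); the point is how an application assembles a connection address from them:

```shell
# Illustrative values: Docker injects these into "web" when started with --link db:db
export DB_PORT_3306_TCP_ADDR="172.17.0.2"   # IP of the db container (made up here)
export DB_PORT_3306_TCP_PORT="3306"         # port exposed by the db Dockerfile

# An application (e.g. wp-config.php via getenv) builds its DB address from them
db_host="${DB_PORT_3306_TCP_ADDR}:${DB_PORT_3306_TCP_PORT}"
echo "$db_host"
```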


APPENDIX II: Docker Compose

When working with microservices, you want to avoid long commands. Docker Compose makes the management of long Docker commands a lot easier. For example, this is how the Microservices approach would look with Docker Compose:

file docker-compose.yml

web:
  image: puppet_ms_apache
  command: /usr/sbin/apache2ctl -D FOREGROUND
  links:
   - db:db
  ports:
   - "80:80"

db:
  image: puppet_ms_mysql
  command: /usr/sbin/mysqld --bind-address=


and you can start both containers with the command docker-compose up

DevOps Automation Services

Since we launched in 2014, we have assisted numerous companies, open-source projects and individuals in learning, experimenting with and using the automation tools that nowadays define operations. Many things are changing in this area.

We have helped many people achieve their automation goals, and we are happy to see how their operational costs are reduced and their productivity increased.

Do you need help with DevOps and automation ? Don't hesitate to contact us at You can also find more information at

Stay tuned! Very soon we will release a new set of tools that will make your life in operations even easier.