Dockerising Puppet

Learn how to use Puppet to manage Docker containers. This post contains complementary technical details to the talk on the 23rd of April at the Puppet Camp in Sydney.

Manageacloud is a company that specialises in multi-cloud orchestration. Please contact us if you want to know more.



The goal is to manage the configuration of Docker containers using existing puppet modules and Puppet Enterprise. We will use the example of a Wordpress application and two different approaches:

  • Fat containers: treating the container as a virtual machine
  • Microservices: one process per container, as originally recommended by Docker


Docker Workflow



1 - Dockerfile

Dockerfile is the "source code" of the container image:

  • It uses imperative programming, which means we need to specify every command, tailored to the target distribution, to achieve the desired state.
  • It is very similar to bash; if you know bash, you know how to use a Dockerfile.
  • In large and complex architectures, the goal of the Dockerfile is to hook in a configuration management system like Puppet to install the required software and configure the container.

For example, this is a Dockerfile that will create a container image with Apache2 installed in Ubuntu:

FROM ubuntu
MAINTAINER Ruben Rubio Rey <>
RUN apt-get update
RUN apt-get install -y apache2


2 - Container Image

The container image is generated from the Dockerfile using docker build:

docker build -t <image_name> <directory_path_to_Dockerfile>


3 - Registry

A useful analogy: the Registry works like a git repository. It allows you to push and pull container images, and images can have different versions.

The Registry is the central point for distributing Docker containers. It does not matter whether you use Kubernetes, CoreOS Fleet, Docker Swarm, Mesos, or you are just orchestrating containers on a single Docker host.

For example, if you are the DevOps person within your organization, you may decide that the developers (who are already developing under Linux) will use containers instead of virtual machines for the development environment. You would then be responsible for creating the Dockerfile, building the container image and pushing it to the registry. All developers within your organization can now pull the latest version of the development environment from the registry and use it.


4 - Development Environment

Docker containers can be used in a development environment. You can make developers more comfortable with the transition to containers by using the controversial "Fat Containers" approach.


5 - Production Environment

You can orchestrate Docker containers in production for two different purposes:

  • Docker Host: Using containers as a way to distribute the configuration. This post focuses on using containers in Docker Hosts.
  • Cluster Management: Mesos, Kubernetes, Docker Swarm and CoreOS Fleet are used to manage containerised applications in clustered environments. These aim to create a layer on top of the available virtual machines, allowing you to manage all resources as one unified whole. These technologies are very likely to evolve significantly over the next 12 months.


Fat Containers vs Microservices

When you are creating containers, there are two different approaches:

  • Microservices: running one single process per container.
  • Fat containers: running many processes and services in a container. In fact, you are treating the container as a virtual machine.

The problem with the microservices approach is that Linux is not really designed for microservices. If you have several processes running in a container and one of them is detached from its parent, it is the responsibility of the init process to reap its resources. If those resources are not reaped, the process becomes a zombie.

Some Linux applications are not designed for single-process systems either:

  • Many Linux applications are designed to have a crontab daemon to run periodic tasks.
  • Many Linux applications write vital information directly to the syslog. If the syslog daemon is not running, you might never notice those messages.

In order to run multiple processes in a container, you need an init process or similar. There are base images with an init process built in, for example for Ubuntu and Debian.
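The reaping problem described above can be observed with a short experiment. In this sketch (a plain-shell illustration, not tied to Docker), `sh` forks a child and then replaces itself with a `sleep` that never calls wait(), so the exited child lingers as a zombie:

```shell
# sh forks `true`, then execs into `sleep` (keeping the same PID).
# `sleep` never wait()s on the inherited child, so after `true` exits
# it remains as a zombie until its parent terminates.
sh -c 'true & exec sleep 2' &
parent=$!
sleep 1
# List the states of the parent's children; the zombie shows up as "Z"
ps -o stat= --ppid "$parent"
```

Inside a container whose main process behaves like the `sleep` here, such zombies accumulate; an init process (as in phusion/baseimage) reaps them.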

What to use? My advice is to be pragmatic; no one size fits all. Your goal is to solve business problems without creating technical debt. If fat containers suit your business need better, use them; if microservices fit better, use those instead. Ideally, you should know how to use both and analyse the case in point to decide what is best for your company. There are no absolute technical reasons to use one over the other.


Managing Docker Containers with Puppet

When we use Puppet (or any other configuration management system) to manage Docker containers, there are two sets of tasks: container creation and container orchestration.


Container Creation

  1. The Dockerfile installs the puppet client and invokes the puppet master to retrieve the container's configuration
  2. The new image is pushed to the registry


Container Orchestration

  1. The Docker host's puppet agent invokes the puppet master to get the configuration
  2. The puppet agent identifies a set of containers that must be pulled from the Docker registry
  3. The puppet agent pulls, configures and starts the Docker containers on the Docker host


Puppet Master Configuration

For this configuration, we assume that the Puppet Master is running in a private network where all clients are trusted. This allows us to use the configuration setting autosign = true in the master's puppet.conf.
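As a sketch, the relevant fragment of the master's puppet.conf would look like this (assuming the default Puppet Enterprise path /etc/puppetlabs/puppet/puppet.conf; remember that autosigning every request is only safe on a trusted network):

```
# /etc/puppetlabs/puppet/puppet.conf on the master
[master]
    autosign = true
```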


Docker Registry

The Docker registry is like a "git repository" for container images: you can push and pull images, and images can have version numbers. You can use a hosted provider for the Docker registry or install one yourself. For this example we will use the garethr/docker module from the Puppet Forge to create our docker-registry puppet manifest:

class docker-registry {

    include 'docker'

    docker::run { 'local-registry':

        # Name of the image in Docker Hub
        image => 'registry',

        # We are mapping a port from the Docker host to the container.
        # If you don't do that you cannot have access
        # to the services available in the container
        ports           => ['5000:5000'],

        # We send the configuration parameters required to configure an insecure version of a local registry
        env             => ['SETTINGS_FLAVOR=dev', 'STORAGE_PATH=/var/docker-registry/local-registry'],

        # Containers are stateless. If you modify the filesystem
        # you are creating a new container.
        # If we want to push containers, we need a
        # persistent layer somewhere.
        # For this case, in order to have a persistent layer,
        # we are mapping a folder in the host with a folder in the container
        volumes         => ['/var/docker-registry:/var/docker-registry'],
    }
}


Please note that this installs an insecure Docker registry for testing purposes only.


Fat Containers Approach

For this example, I am using a fat container, as I am targeting the development environment for the developers within my organization. Fat containers work very much like virtual machines, so the learning curve will be close to zero. And if the developers are already using Linux, switching to containers removes the overhead of the hypervisor, so their computers will immediately feel faster.

This fat container will contain the following services:

  • Provided by the base image:
    • init
    • syslog
    • crontab
    • ssh
  • Provided by Puppet:
    • mysql
    • apache2 (along with Wordpress codebase)

The following Dockerfile builds the Wordpress fat container image:

FROM phusion/baseimage
MAINTAINER Ruben Rubio Rey  ""

# Activate AU mirrors
COPY files/ /etc/apt/sources.list

# Install puppet client using Puppet Enterprise
RUN curl -k | bash

# Configure puppet client (same file, with the "certname" line removed)
COPY files/puppet.conf /etc/puppetlabs/puppet/puppet.conf

# Apply puppet changes. Note certname, we are using "wordpress-image-"
# and three random characters.
#  - "wordpress-image-" allows Puppet Enterprise
# to identify which classes must be applied
#  - The three random characters are used to
# avoid conflict with the node certificates
RUN puppet agent --debug --verbose --no-daemonize --onetime --certname wordpress-image-`date +%s | sha256sum | head -c 3; echo `

# Enable SSH - As this is meant to be a development environment,
# SSH might be useful to the developer
# This is needed for phusion/baseimage only
RUN rm -f /etc/service/sshd/down

# Change root password - even if we use key authentication
# knowing the root's password is useful for developers
RUN echo "root:mypassword" | chpasswd

# We enable the services that puppet is installing
COPY files/init /etc/my_init.d/10_init_services
RUN chmod +x /etc/my_init.d/10_init_services

When we build the Docker image, it requests its configuration from the Puppet Master using the certname "wordpress-image-XXX", where XXX are three random characters.
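The suffix trick from the RUN line can be reproduced on its own. A minimal sketch:

```shell
# Hash the current epoch time and keep the first three characters,
# exactly as the Dockerfile's RUN line does
suffix=$(date +%s | sha256sum | head -c 3)
certname="wordpress-image-${suffix}"
echo "$certname"
```

Because the suffix is derived from the build time, rebuilding the image yields a different certname, which avoids certificate clashes on the master.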

Puppet master returns the following manifest:

class wordpress-all-in-one {

  # Problems using official mysql from Puppet Forge
  # If you try to install mysql using package {"mysql": ensure => installed }
  # it crashes. It tries to do something with the init process
  # and this container does not have a
  # fully featured init process. "mysql-noinit" installs
  # mysql without any init dependency.
  # note that although we cannot use mysql Puppet Forge
  # module to install the software, we can use
  # the types to create database, create user
  # and grant permissions
  include "mysql-noinit"

  # Fix unsatisfied requirements in Wordpress class.
  # hunner/wordpress module assumes that
  # wget is installed in the system. However,
  # containers by default have minimal software
  # installed.
  package {"wget": ensure => latest}

  # hunner/wordpress,
  # removing any task related with
  # the database (it will crash when
  # checking if mysql package is installed)
  class { 'wordpress':
    install_dir => '/var/www/wordpress',
    db_user     => 'wp_user',
    db_password => 'password',
    create_db   => false,
    create_db_user => false
  }

  # Ad-hoc apache configuration
  # installs apache, php and adds the
  # virtual server wordpress.conf
  include "apache-wordpress"

Build the container image:

docker build -t puppet_wordpress_all_in_one /path/to/Dockerfile_folder/

Push the image to the registry:

docker tag puppet_wordpress_all_in_one
docker push
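The tag and push commands need the registry-prefixed image reference. A minimal sketch, assuming the registry is reachable at localhost:5000 (a hypothetical address; adjust to your environment):

```shell
# Hypothetical registry address; replace with your registry host:port
REGISTRY=localhost:5000
IMAGE=puppet_wordpress_all_in_one
TARGET="$REGISTRY/$IMAGE"
echo "$TARGET"
# With a running Docker daemon you would then execute:
#   docker tag "$IMAGE" "$TARGET"
#   docker push "$TARGET"
```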

Orchestrate the container

To orchestrate the fat container in a Docker host:

class container-wordpress-all-in-one {

    class { 'docker':
        extra_parameters => ['--insecure-registry'],
    }

    docker::run { 'wordpress-all-in-one':

        # image is fetched from the Registry
        image => '',

        # The fat container is mapping the port 80 from the docker host to
        # the container's port 80
        ports => ['80:80'],
    }
}


Microservices Approach

Now we are going to reuse as much of the existing code as possible, following the microservices approach. We will have two containers: a DB container running MySQL and a WEB container running Apache2.


1 - MySQL (DB) Microservice Container

As usual, we use a Dockerfile to build the Docker image. The two Dockerfiles are very similar; I will highlight the changes.

# This time we are using the Docker Official image Ubuntu (no init process)
FROM ubuntu
MAINTAINER Ruben Rubio Rey ""

# Activate AU mirrors
COPY files/ /etc/apt/sources.list

# This base image does not have curl installed
RUN apt-get update && apt-get install -y curl

# Install puppet client
RUN curl -k | bash

# Configure puppet client
COPY files/puppet.conf /etc/puppetlabs/puppet/puppet.conf

# Apply puppet changes. We change the certname
# so Puppet Master knows what configuration to retrieve.
RUN puppet agent --debug --verbose --no-daemonize --onetime --certname ms-mysql-image-`date +%s | sha256sum | head -c 3; echo `

# Expose MySQL to the Docker network. This tells Docker that the
# container provides a service that other containers might need
EXPOSE 3306
The class returned by the Puppet Master is wordpress-mysql-ms. You will notice that this class is exactly the same as the fat container's class, but everything that is not related to the database is commented out.

class wordpress-mysql-ms {

    # Install MySQL
    include "mysql-noinit"

    # Unsatisfied requirements in wordpress class
    # package {"wget": ensure => latest}

    # Puppet forge wordpress class, removing mysql
    # class { 'wordpress':
    #   install_dir => '/var/www/wordpress',
    #   db_user => 'wp_user',
    #   db_password => 'password',

    # Apache configuration not needed
    # include "apache-wordpress"

Build the container image:

docker build -t puppet_ms_mysql .

Push the container image to the registry:

docker tag puppet_ms_mysql
sudo docker push


2 - Apache (WEB) Microservice Container

Once more, we use a Dockerfile to build the image. The file is exactly the same as the MySQL one, except for a few highlighted lines.

FROM ubuntu
MAINTAINER Ruben Rubio Rey ""

# Activate AU mirrors
COPY files/ /etc/apt/sources.list

# Install CURL
RUN apt-get update && apt-get install -y curl

# Install puppet client
RUN curl -k | bash

# Configure puppet client
COPY files/puppet.conf /etc/puppetlabs/puppet/puppet.conf

# Apply puppet changes
RUN puppet agent --debug --verbose --no-daemonize --onetime --certname ms-apache-image-`date +%s | sha256sum | head -c 3; echo `

# Apply patch to link container.
# We have to tell Wordpress where
# mysql service is running,
# using a system environment variable
# (Explanation in the next section)

# If we are using Puppet for microservices
# we should update the Wordpress module
# to set this environment variable.
# In this case, I am exposing the changes so
# it is easier to see what is changing.

RUN apt-get install patch -y
COPY files/wp-config.patch /var/www/wordpress/wp-config.patch

RUN cd /var/www/wordpress && patch wp-config.php < wp-config.patch

# We configure PHP to read system environment variables
COPY files/90-env.ini /etc/php5/apache2/conf.d/90-env.ini

The class returned by the Puppet Master is wordpress-apache-ms. You will notice that it is very similar to wordpress-mysql-ms and to the fat container's class wordpress-all-in-one. The difference is that everything related to mysql is commented out and everything related to wordpress and apache is executed.

class wordpress-apache-ms {

    # MySQL won't be installed here
    # include "mysql-noinit"

    # Unsatisfied requirements in wordpress class
    package {"wget": ensure => latest}

    # Puppet forge wordpress class, removing mysql
    class { 'wordpress':
        install_dir => '/var/www/wordpress',
        db_user => 'wp_user',
        db_password => 'password',
        create_db => false,
        create_db_user => false
    }

    # Ad-hoc apache configuration
    include "apache-wordpress"



3 - Orchestrating Web and DB Microservice

The Puppet class that orchestrates both microservices is called container-wordpress-ms:

class container-wordpress-ms {

    # Make sure that Docker is installed
    # and that it can get images from our insecure registry
    class { 'docker':
        extra_parameters => ['--insecure-registry'],
    }

    # Container DB will run MySQL
    docker::run { 'db':
        # The image is taken from the registry
        image => '',
        command => '/usr/sbin/mysqld --bind-address=',
        use_name => true,
    }

    # Container WEB will run Apache
    docker::run { 'web':
        # The image is taken from the Registry
        image => '',
        command => '/usr/sbin/apache2ctl -D FOREGROUND',
        # We are mapping a port between the Docker Host and the Apache container.
        ports => ['80:80'],
        # We link the WEB container to the DB container. This allows WEB to access
        # the services exposed by the DB container (in this case 3306)
        links => ['db:db'],
        use_name => true,
        # We need the DB container up and running before starting WEB.
        depends => ['db'],
    }
}


APPENDIX I: Linking containers

When we link containers in the microservices approach, we are performing the following tasks.


Starting "db" container:

This starts puppet_ms_mysql as a container named db. Please note that puppet_ms_mysql exposes port 3306, which tells Docker that this container provides a service that might be useful to other containers.

docker run --name db -d puppet_ms_mysql /usr/sbin/mysqld --bind-address=


Starting "web" container

Now we want to start the container puppet_ms_apache, named web.

If we link the containers and execute the command env, the following environment variables are created in the web container:

docker run --name web -p 1800:80 --link db:db puppet_ms_apache env

These variables indicate where the mysql database is; the application should use the environment variable DB_PORT_3306_TCP_ADDR to connect to the database.

  • DB is the name of the container we are linking to
  • 3306 is the port exposed in the Dockerfile of the db container
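As an illustration of this naming scheme, the sketch below sets the variables with made-up values (Docker injects the real address at runtime when the containers are linked):

```shell
# Illustrative values only; Docker assigns the real ones when linking
export DB_PORT_3306_TCP_ADDR=172.17.0.5
export DB_PORT_3306_TCP_PORT=3306
# The application assembles its MySQL endpoint from these variables
echo "mysql at ${DB_PORT_3306_TCP_ADDR}:${DB_PORT_3306_TCP_PORT}"
```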


APPENDIX II: Docker Compose

When working with microservices, you want to avoid long commands. Docker Compose makes the management of long Docker commands a lot easier. For example, this is how the Microservices approach would look with Docker Compose:

file docker-compose.yml

web:
  image: puppet_ms_apache
  command: /usr/sbin/apache2ctl -D FOREGROUND
  links:
   - db:db
  ports:
   - "80:80"

db:
  image: puppet_ms_mysql
  command: /usr/sbin/mysqld --bind-address=


and you can start both containers with the command docker-compose up.

Written by Ruben Rubio Rey on Thursday April 23, 2015
Permalink - Tags: puppet, docker, devops
