ManageaCloud Community Version released

Hello friends of ManageaCloud!

We are happy to announce the release of ManageaCloud Community Version, but we have more news!

What is ManageaCloud?

ManageaCloud is a cloud orchestration framework. Its core is technology agnostic, which allows easy integration with any type of cloud and technology.

This system allows you to create server configurations, where you define how the different servers that compose your infrastructure work. Server configurations are the commands that build an application server, a database or an email server.

If you are already using a configuration management system such as Puppet, Ansible or Docker containers, you can use it to build your servers. For more information, you can read about how to orchestrate a Joomla server using Docker Compose, available in the quick start guides.

Once the different servers are configured, you can use the infrastructure template (aka MaC Framework) to define the blueprint of the other infrastructure elements: auto-scaling groups, load balancers, golden images, DNS and more.

The MaC Framework runs commands in the command line interface (you can see examples in our quickstart guides) for the different technologies that you require. It is extremely flexible and easy to use.

Cloud Libraries

This new feature has been released along with ManageaCloud Community Version. It lets you use libraries to reuse code, so complex orchestration is even easier. We have released our first library for AWS, which allows you to create an application server, a golden image, a load balancer, autoscaling groups, launch configurations and CloudWatch alerts with a single command. All you need is a simple macfile:

mac: 1.0.0
description: Scaled and Load-Balanced nginx
name: {INF_NAME}
version: {INF_VERSION}
parents: # we define what libraries we want to use
roles:
  app:
    instance create:
      # this is the server configuration. In this case we are writing the commands directly,
      # but we can separate this code by using a Server Configuration.
      bootstrap bash: |
        sudo apt-get update
        sudo apt-get install nginx -y
infrastructures:  # this sets the order of the orchestration
  # create an EC2 instance using the configuration for role 'app'
  app_servers:
    name: app
    provider: amazon
    location: {AWS_REGION}
    hardware: t1.micro
    role: app
    release: ubuntu:trusty
    amount: 1
  # we customise a default value defined in the parent aws library
  scaling_group:
    desired-capacity: 1
  # we accept all other default configuration

The AWS library defines the commands that have to be executed to create and destroy every resource. The previous macfile is a demo that creates a Scaled and Load-Balanced nginx, and it is available on GitHub. This allows you to create the whole infrastructure by executing a single command:

mac -s infrastructure macfile -p INF_NAME=demo INF_VERSION=1 AWS_REGION=us-east-1

The beauty of this technology is that:
 - You have full visibility of how it works.
 - You can easily extend the functionality or customise it for your own application.
 - You can easily integrate macfiles into a great variety of operational strategies, such as Continuous Delivery and blue-green deployments.
 - You have full visibility of the history of what happened: logs, command output and exit codes.

Would you like to collaborate? Create your library and send a pull request on GitHub! Feel free to contact us if you have questions.

Blue Green Deployments

This technology is flexible and powerful and works extremely well for blue-green deployments. Every time you execute a macfile, it associates a name and a version with the execution, which gives you full control to know what is running in your cloud infrastructure, and you can use it to easily organise flexible blue-green deployments.
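As a sketch of how the name-and-version pairing supports a blue-green rollout, consider the ordering below. The deploy_version and destroy_version functions are hypothetical stand-ins for the real mac invocations (such as the mac -s infrastructure command shown above); here they only log, so the rollout ordering is visible:

```shell
#!/bin/sh
# Hypothetical wrappers around the real mac create/destroy commands; they only log here.
deploy_version()  { echo "deploy demo v$1"; }
destroy_version() { echo "destroy demo v$1"; }

blue_green_rollout() {
  current="$1"; next="$2"
  deploy_version "$next"       # bring up green alongside blue
  # ...switch traffic (DNS or load balancer) to version $next here...
  destroy_version "$current"   # retire blue only once green is serving
}

blue_green_rollout 1 2
```

Because each infrastructure carries its own version, both stacks coexist during the traffic switch, and the old one is only destroyed once the new one is healthy.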

Macfiles and Dependencies

Every macfile has dependencies on the environment: in the previous example you need the AWS CLI properly installed and configured with the right credentials to execute the creation of the infrastructure. That's why we designed triggers, where you can isolate an action (such as the creation or the destruction of an infrastructure) behind a single POST to a URL.

For example, a trigger installs and configures the environment required for the macfile to run (installing and configuring the AWS CLI) and then runs the mac -s infrastructure macfile command, isolating every dependency and allowing you to create a fully functional, production-ready environment.

As this feature requires a cluster to run containers, it is not available in the Community Version, but you can use it free of charge with the Start-Up or Business plans.

ManageaCloud Community Version

You have requested and we have delivered! Many organisations want the freedom and the flexibility to use ManageaCloud in their own infrastructure. That's why, as an alternative to the Start-Up and Business plans, we are happy to announce the release of ManageaCloud Community Version. It is free to use. This standalone version allows you to install ManageaCloud in your own infrastructure.

ManageaCloud CLI is now Open Source

This release of ManageaCloud allows us to announce the stable version of the ManageaCloud Command Line Interface, a powerful tool that allows you to manage your infrastructure from the terminal. We are currently opening up our system! Fork us on GitHub to support us!


Manageacloud at LinuxConf 2016

Several team members of Manageacloud attended LinuxConf 2016 in Melbourne, where Manageacloud featured in several talks as a new emerging technology.

Managing Infrastructure as Code, by Allan Shone

A description of the different technologies available to automate the deployment of servers and infrastructures, analysing Ansible, Chef, Puppet, CloudFormation, Terraform and Manageacloud.



Continuous Delivery Using Blue-Green Deployments and Immutable Infrastructure, by Ruben Rubio Rey



I realise that the Q&A could be improved, so I have decided to rewrite the questions with more accurate answers.

1) When you modify a column on a table in the database, blue-green deployments require you to create the new column, synchronise the information between the old and the new column, and then destroy the old column. What solutions are available for the data synchronisation while you are performing a blue-green deployment?

You can solve the data synchronisation problem in two different ways:
 - From the database, using triggers to synchronise the information between the two columns
 - From the application, which requires an extra deployment. First, you deploy the version that works with both columns and synchronise the data. Then you deploy the version that operates with the new column only, allowing you to safely delete the old column from the table.
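As an illustrative sketch of the first approach (MySQL syntax; the table and column names below are hypothetical), a BEFORE trigger can mirror values between the old and the new column while both application versions are live:

```sql
-- Hypothetical schema: renaming users.user_name to users.username.
ALTER TABLE users ADD COLUMN username VARCHAR(255);
UPDATE users SET username = user_name;  -- one-off backfill

-- Whichever column a writer fills in, copy it to the other one.
CREATE TRIGGER users_sync_insert BEFORE INSERT ON users FOR EACH ROW
  SET NEW.username  = COALESCE(NEW.username,  NEW.user_name),
      NEW.user_name = COALESCE(NEW.user_name, NEW.username);
```

Once every client writes only the new column, the trigger and the old column can be dropped.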

2) What do you recommend for backing up the database?

If you have a backup strategy for your current deployments, blue-green deployments don't require anything additional.

Please note that when you are working with blue-green deployments you can deploy the database changes ahead of the code changes. This can be easily synchronised with a backup strategy to make sure that data won't be lost if things go wrong when performing the upgrade.

3) In the Framework Approach, how do you cope if you change cloud providers?

Service Wrappers, in theory, are able to find the common denominator between cloud suppliers, so swapping cloud supplier should be easy. In practice, cloud suppliers are so different that the code you write is very specific to the given cloud supplier. This means that even if you use a service wrapper, you need to customise the code depending on the cloud supplier you are using.

The Framework Approach requires you to update the CLI commands for the different actions, which, in practice, is not that different from working with the existing Service Wrappers.

4) If you use the Framework Approach you need to keep up with the new updates from the cloud supplier yourself, while if you use a Service Wrapper those updates will be performed by the company that maintains the wrapper. What is the difference between the Service Wrapper and the Framework Approach?

Service Wrappers keep up to date with the new features released by the cloud supplier by upgrading the wrapper. Therefore, you don't really know when the new features will be available to you.

The Framework Approach uses the Command Line Interface, and you need to maintain the different commands across subsequent versions. But please note that:
 - Cloud suppliers tend to be backwards compatible, so they add features rather than modifying existing ones.
 - New features are available to you right away.
 - You can always define which version of the CLI is compatible with your existing infrastructure blueprint, which gives you full control over upgrades.

There are no additional disadvantages, and there are potential advantages, if you decide to use the Framework Approach.


5) Have you been burned before by a staging environment that was not able to capture the problems?

The objective of blue-green deployments is wider than just capturing problems in live deployments. It ensures zero downtime, it gives you a standard procedure to roll back, and it ensures complete documentation of how the production infrastructure works.

Rolling back is not a matter of 'if' but a matter of 'when'. If you do a great job you won't have to roll back often. If you do a poor job the rollback ratio will be higher. Experience shows that the need to roll back is a matter of time, which is why the smart thing is to be ready when you are required to do so.

For further questions don't hesitate to contact me at ruben at manageacloud dot com. For more information about the Framework Approach, please visit


Deploying Containers using Docker Compose

Docker Compose is a tool that allows you to deploy an application composed of multiple containers. For example, imagine that you want to deploy Joomla. It requires two containers: the web application (which also runs the Apache web server) and the database.

Deploying using the docker CLI

First, let's deploy Joomla using the Docker command line interface.

1 - Run the database container. We do it first because the database is a dependency of the application.

docker run --name db -e MYSQL_ROOT_PASSWORD=my-secret-pw mysql

2 - Run the application container, link it to the database and map port 80

docker run --name my-joomla --link db:mysql -d -p 80:80 joomla

Executing containers in the background

The previous commands allow you to run the containers, and all the logs will be shown in the terminal. If you want to run the containers in the background, you need to add the parameter -d

docker run -d --name  db -e MYSQL_ROOT_PASSWORD=my-secret-pw mysql

Stopping containers

If you want to stop and delete a running container, allowing you to start another container with the same name, you just need to execute the following command. The last parameter is the container name.

docker rm -f db

NOTE: you can get the names of all running containers by executing the command docker ps

Installing Docker

You can install Docker by executing the following command:

curl -sSL | sh

Deploying using Docker Compose

Docker Compose makes things simpler. It allows you to create a simple YAML file that contains everything required to orchestrate both containers. Let's run the previous example using Docker Compose.

Create a file called compose-joomla.yml with the following content:

joomla:
  image: joomla
  links:
    - db:mysql
  ports:
    - 80:80
db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: my-secret-pw

and run docker-compose up

docker-compose up

Installing Docker Compose

If you do not have Docker Compose installed yet, you can do it by executing the following commands:

curl -sL`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Deploying Containers in the Cloud using Manageacloud

Now that we know how to run our application, we can use Manageacloud to deploy the containers in cloud environments, such as Amazon Web Services, Google Compute Engine, DigitalOcean, Rackspace and more.

First, in our account, we create a server configuration called docker_compose_joomla, using shell, for Ubuntu 14.04, with the following content:

set -x # enable debug

# install docker
curl -sSL | sh

# install docker compose
curl -sL`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# add the configuration for joomla using Docker Compose
mkdir ~/compose-joomla
cat > ~/compose-joomla/docker-compose.yml << EOL
joomla:
  image: joomla
  links:
    - db:mysql
  ports:
    - 80:80
db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: example
EOL

# execute both containers
cd ~/compose-joomla/ && /usr/local/bin/docker-compose up -d

Running applications in the cloud

A server configuration is everything that is required to run applications in the cloud using Manageacloud.

Deploying from the web interface

There are two ways to deploy a server from the web interface:
- By clicking "Quick Deployment" from the server configuration view
- By clicking "Production" or "Testing" from the advanced deployment page

Deploying using the mac cli

You can also deploy from the command line interface:

mac instance create -c docker_compose_joomla

Installing mac Command Line Interface

You can install mac cli by executing the following command:

curl -sSL | bash

Deploying using Manageacloud triggers

The mac cli and the web interface require credentials and some other minor tweaks to run. However, if you use triggers, you will isolate all that complexity. You can deploy just by executing a POST to a URL.
1 - First, create a new trigger in your account
2 - Comment out the lines that contain the credentials:

export MAC_USER=[...]
export MAC_APIKEY=[...]

3 - Add the following line to the trigger

mac instance create -c docker_compose_joomla

Now you can run the trigger, deploying the application, just by executing a POST to a URL. Example:

curl -X POST

Triggers and Webhooks

Triggers are bash scripts that isolate all the credentials and complexity. Triggers are especially well suited to run in webhooks as part of a Continuous Integration or Continuous Deployment pipeline.

Deploying using macfiles

Sometimes it is just not good enough to deploy single servers, as you need to use infrastructure resources such as load balancers, autoscaling groups and more.

macfile is a technology agnostic framework that allows you to integrate any technology that uses bash. See the quickstart guide for more information.

Deploying using API

If you want to integrate the deployment of Joomla with an application, you should use the API to create the server. For example:

$ curl -X POST -i -H "Content-Type: application/json" \
-H "Authorization: ApiKey username:myhashedpass" -d \
'{"hardware": "512mb", "cookbook_tag": "docker_compose_joomla", "location": "sfo1"}' \

Deploying using Manageacloud scripts

Sometimes you need to deploy server configurations in existing servers. In this case, you can deploy using the Manageacloud script, accessible from the deployment page. For example:

curl -sSL | bash


When should I use Manageacloud scripts?

You can use the Manageacloud script in many different cases, for example:
- In the Dockerfile to create the configuration of your container
- If you create development servers using Virtual Box
- and more


Docker Compose is a fantastic tool to deploy containers and microservices in the cloud. Using it along with ManageaCloud offers us the flexibility required to deploy applications in the cloud, covering many different use cases and deployment scenarios.

Understanding the Manageacloud Systems and Environment

Manageacloud provides a simple and powerful way to manage all infrastructure, from individual servers to their configurations and network access. By having a well defined configuration file in YAML format, all parts of a system can be automatically created, configured, and destroyed.

Within Manageacloud, all infrastructure configuration is controlled by a Macfile. The Macfile is coded using YAML, conforming to a few simple standards. By design, the instances and items within an infrastructure are completely open-ended. Simply put, command line instructions are defined with given parameters, and then used to carry out a set of steps for the infrastructure. The quickest way to get started with a Macfile is to run through and follow the Quickstart Guide. This guide shows a basic Macfile and the components within it, and then expands upon that with parameterisation and destruction.

A Macfile has a few sections that are used for defining a series of steps to be carried out for a particular system. Roles are used to define specific Server Configurations to be used within the infrastructure. These configurations can be parameterised, and are controlled within a Manageacloud account. Each configuration is provisioned using a specific tool or service, such as puppet, docker, or chef. For existing infrastructure already defined using these tools, these provisioners can be used to readily use the Manageacloud framework. To begin simply though, a simple shell script or the Manageacloud Sysadmin IDE can be used to easily define what a server should contain.
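As a minimal sketch of that shape (the names and values below are illustrative, following the macfile example earlier in this post):

```yaml
mac: 1.0.0
description: minimal example
name: demo
version: 1
roles:
  app:                 # a Server Configuration used by the infrastructure below
    instance create:
      bootstrap bash: |
        sudo apt-get update
        sudo apt-get install nginx -y
infrastructures:
  app_servers:         # instantiates the 'app' role in the cloud
    name: app
    provider: amazon
    location: us-east-1
    hardware: t1.micro
    role: app
    release: ubuntu:trusty
    amount: 1
```

The roles section holds the server configurations, and the infrastructures section lists, in order, what gets created from them.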

Server Configuration Overview


Complex systems have multiple services and infrastructure configurations, with a good potential of overlap. Using the Manageacloud Sysadmin IDE to build configurations, Blocks can be created and re-used within multiple configurations. This allows for easy base configurations that can be readily extended upon for application specific detail.

Once a configuration has been defined, it can then be assigned to a Role within an Infrastructure. This infrastructure is a YAML file, hosted within the application repository or somewhere else publicly accessible via HTTP. Infrastructure Macfiles are used to define sets of configurations for systems, using variables and arguments for environment specific options. When deploying an infrastructure, these options can be passed through, and with a specific version denoted, associated with a particular release of an application. The Manageacloud Command Line Tool provides comprehensive and easy management of Infrastructures and Instances.

Instances are running parts of any particular configuration or infrastructure. These are tangible hosts that are presently able to be interacted with, within the cloud provider configured. Typically, an instance will belong to a specific version of an infrastructure, but Manageacloud allows for specific Server Configurations to be instantiated at will using individual parameters as required. Creating a specific instance is also a great way to quickly test configuration changes, or specific application modifications in an isolated environment.

Resources make up the parts of an infrastructure that aren't explicitly servers themselves, such as Load Balancers or Autoscaling Groups. These are defined within the infrastructure, and configured as a part of a deployment process, but the software and installation isn't explicitly defined as a typical server.

Triggers form a valuable part of Manageacloud and allow for comprehensive and detailed chaining of deployment and infrastructure management. A simple trigger would be to deploy a new version of an infrastructure when an application update has completed. For instance, a successful Continuous Integration or Deployment run could push to a Manageacloud trigger, which would then complete a new deployment, for further testing or in preparation for a Blue Green Deployment. A trigger is a simple HTTP request that has a set of actions defined, including the ability to use parameters added to the body of a POST request.

Trigger History Overview


Bringing all of these items together, managing Infrastructure can be a very simple task, and once defined, deployments are no longer a hassle and can happen within minutes. Infrastructures are defined using the Macfile, and can be instantiated via the Command Line Interface or Triggers. Instances form parts within an infrastructure, using Server Configurations or provider specific tooling. Interaction and communication between pieces of an infrastructure is then automated, and instantiated during each deployment.

There are no limits to what can be achieved using the Manageacloud tooling for Infrastructure Management!

Continuous Delivery for Java, using CircleCI and Manageacloud

I am a developer, not a sysadmin, not a devops; still, one needs to understand the basics of server configurations to ensure that one's work runs fine, so that the terrible "it works on my machine" situation does not happen often.

If there is something any developer knows how to do, it is automation: we are used to creating small programs to avoid tedious manual processes. This post shows how to unleash the power of CI with Manageacloud. In this example I will be working with a simple Java application, the java-demo of manageacloud.

The target of this post is to provide easy CI integration for a Java project, which generates a deployment to a new server every time code is pushed to Git, as long as the unit tests pass (because you use unit tests, right?). The unit tests run in CircleCI, a free web-based integration server that is very convenient for this example.

This article assumes you have read the quickstart guide and that you have installed the manageacloud mac command line interface.

Setting up CircleCI

There are essentially two things we need to do here:

  1. Provide a correct circle.yml configuration file
  2. Launch a manageacloud trigger upon artifact build

So let's start with the circle.yml file; let's check what the file looks like:

        machine:
          java:
            version: openjdk7
        general:
          artifacts:
            - "target/java-demo.war"
        test:
          override:
            - mvn clean install
        deployment:
          staging:
            branch: feature/moriano
            commands:
              - curl --data "APP_VERSION=${CIRCLE_BUILD_NUM}"

As you can see, there is nothing too weird here. The relevant parts are the one indicating that the final artifact will end up in "target/java-demo.war" and the curl command that invokes a certain URL. Let me explain that a little: Manageacloud provides "triggers". A trigger is simply a URL that, when invoked with an HTTP POST, will trigger the creation of an infrastructure (that is, a set of servers, fully configured) and will automatically deploy the war artifact provided in the example (in this case, the one located at "target/java-demo.war").

Please pay attention to the deployment section, where we are invoking a Manageacloud trigger. We will explore that in a moment; just keep in mind that we will need to come back to the circle.yml file at the end.

To summarise: now, every time a build successfully completes in CircleCI for your project, it will launch a trigger against Manageacloud, telling Manageacloud that everything is ready to deploy.

Setting up Manageacloud

Now that we have CircleCI properly configured, we will configure our Manageacloud settings.

There is a configuration that you can reuse here; however, I strongly recommend you follow the steps in order to understand what is going on behind the scenes.

Sign in to manageacloud, click on the "server configurations" tab, and then on "New Server Configuration". On the next screen you need to provide a name for your configuration; let's use "my-ci". Click on continue. Now it is time to determine which technology we will use to create our server; as I said, I am not a sysadmin nor a devops, so I will simply use the shell. After that, you will need to select the OS version: pick "Debian Jessie 8". Then you will be presented with some repositories (in case you have used manageacloud before); click on "skip repository". Finally, a screen where you can put your bash config will appear:

So, in that configuration we are essentially going to:

  1. Install java
  2. Install an application server (in this case tomcat)
  3. Access circleci to get the proper war file
  4. Deploy that war file into the application server

The configuration will look like this:

set -x # enable debug
set -e
# Install basic stack and tools
apt-get update
apt-get install -y -q curl
apt-get install tomcat7 -y -q

# Stop tomcat, to make sure everything starts clean
/etc/init.d/tomcat7 stop
# Get the latest build of circleci

curl -sS -o java-demo.war$APP_VERSION/artifacts/0/home/ubuntu/java-demo/target/java-demo.war
rm -rf /var/lib/tomcat7/webapps/*
cp java-demo.war /var/lib/tomcat7/webapps/ROOT.war

# And restart :)
/etc/init.d/tomcat7 start

OK, click on "Save & Finalize" now, then click on the "Home" link at the top-left, and then on the "triggers" tab. Let's create a new trigger; we will call it "ci-trigger". The contents should be this:

set -e # abort at any return code != 0


mac instance create -c my-ci -l lon1 -e APP_VERSION=$APP_VERSION

Once you have saved your trigger, you will be returned to the list of all your triggers. Click on the "ci-trigger" one and a window like this will be presented to you:

Now, that URL that you see is the one we mentioned in the circle.yml file before, remember?

    machine:
      java:
        version: openjdk7
    general:
      artifacts:
        - "target/java-demo.war"
    test:
      override:
        - mvn clean install
    deployment:
      staging:
        branch: feature/moriano
        commands:
          - curl --data "APP_VERSION=${CIRCLE_BUILD_NUM}"

Now that you know which one is your trigger hash, just modify your circle.yml file to reflect the trigger.

Testing everything

Now, every time a CircleCI build occurs and passes all the tests, a trigger will be launched against Manageacloud, which will create a new Linux server containing Tomcat and the very latest version of your Java code. In addition, it is also possible to link your Git account to CircleCI so that it automatically triggers a build for every push you do (this can also be done on a per-branch basis).

As an example, I will make a small change to the code, then push it, and we will see how everything goes.

First, let's see that I have no manageacloud instances running:

$ mac instance list
There is no active instances

Now, let's make a git push:

$ git push

Counting objects: 64, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (37/37), done.
Writing objects: 100% (51/51), 3.78 KiB | 0 bytes/s, done.
Total 51 (delta 14), reused 0 (delta 0)
   cbf7eab..a7765db  feature/moriano -> feature/moriano

At this point CircleCI will be building the new code and will call Manageacloud to trigger its config

$ mac instance list
| Instance name    | IP  |        Instance ID         |   Type  |       Status      |
|                  |     | d5o6i1fvjqvpd4n7lm7bsrhiab | testing | Creating instance |

And if we wait a bit (1-2 minutes)... we will get our manageacloud instance ready for action!

$ mac instance list
| Instance name  |       IP            |          Instance ID            |   Type  | Status |
|    mct-17e     |      | d5o6i1fvjqvpd4n7lm7bsrhiab      | testing | Ready  |

And that is it! This is a simple example of how to unleash the power of continuous delivery with Git + CircleCI + Manageacloud. Note that we are also creating a new server every time we deploy, meaning that the environment in which the code is deployed is clean; that will get rid of the "it works on my machine" problem forever.

Working safely with the AWS Command Line tool

The AWS Command Line Interface provides all the functionality necessary to script and automate your AWS usage. Just like the browser-based Console, detail can be managed and an overview visualised. One of the challenges to be faced when moving away from the browser interface, however, is immediate feedback and prompting. When running a command, it is very easy to mistype or use incorrect detail and end up with a very hefty bill, or something much worse.

Of course, as it is with any of us that use the command line, we make notes of the commands and write scripts so we don't have to remember them. In the simplest form, these could be bash functions within a profile script, or individual files sitting within a bin directory within our environment. For some people, this is perfectly fine and suitable. For others, this can be restrictive, non-intuitive, and prone to regression or inability to share and learn.

The simplest way to work more safely with the command line is to grant only the necessary and explicit permissions to the AWS services that are needed for the specific use-case. When API details are generated through the browser Console, using the Identity & Access Management tool, specific permission sets are selected. In the AWS environment, these are known as Policies. Taking care to choose these specifically will help ensure that only the services intended to be used can be used. For instance, if you know that DynamoDB will not be interacted with, don't add any of the policy items relating to it. As with all AWS services, these are prefixed with the service name, for example AmazonDynamoDB. It is even possible (and a good idea) to restrict access to specific regions of the AWS zones. If all infrastructure and development will be undertaken within Sydney, don't allow access to any other location.
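For instance, a policy that limits a user to EC2 in a single region could look roughly like this (a sketch only; the aws:RequestedRegion condition key is an assumption about the IAM feature set available to your account):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:RequestedRegion": "ap-southeast-2" }
      }
    }
  ]
}
```

Any call to a non-EC2 service, or to EC2 in another region, is then denied by default.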

Group functionality exists within the Identity & Access Management tool to make policy crafting simple, and re-usable for many users. Within an organisation, it would be cumbersome and dangerous to manage individual policy sets for many users. By creating a group, assigning specific policies just for the type of group, and then assigning users to those groups, large scale policy sets can be defined and maintained. For instance, a Developer group could be created that would allow access to EC2, DynamoDB, and others as necessary, but not to SQS or SNS for example. This would allow a user within the Developer group to interact with instances in the allowed services whilst preventing their API interaction with others. Development environments could be created, developed on by interacting with shared provisions of other services, and managed in an overall capacity by an Administrator.

When running the command for the first time, it will be noted that access details are required. During the above interaction of the Identity & Access Management tool, when policies are selected and added for a user, the Access Key ID and Secret can be created for this purpose. Multiple users could also be created with different permissions, when different service access is required in sets. Management of certain types of services could be completed with users in one location, and completely different services and management can be completed in an entirely separate location.
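Those access details usually end up in the AWS CLI credentials file, where each user (or permission set) can be kept as a named profile; the keys below are placeholders:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

[admin]
aws_access_key_id = AKIAYYYYYYYYYYYYYYYY
aws_secret_access_key = yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
```

A profile is then selected per command, e.g. aws ec2 describe-instances --profile admin.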

One of the easiest ways to learn how to use the AWS Command Line is to simply look at the commands, view their help, and try them out. The browser console will contain the most up-to-date information about all services and the instances within them, so it's a great idea to keep a tab open in your browser to view any changes that take place from the commands that are executed. If worst comes to worst, and an incorrect command is run or unintended outcomes have taken place, having the console readily open will help with reversing any changes quickly.

Managing scripts to give AWS commands a shorthand is an obvious way to save time and prevent some degree of misuse. Whilst this is fine for simple items, a lot of flexibility is quickly lost when extra options are required or when requirements change. There is also the need to share configurations between many users. It would make sense in that case to manage some form of repository where these configurations could be stored and made available, and where collaboration could be fostered to provide the same functionality across a wider team.

Manageacloud provides an even simpler approach, reducing errors and duplication, by working with Macfiles, Infrastructures, and Configurations. Server Configurations can be stored to readily create instances of common environments. Each server type can have its complete configuration defined, so when a need arises to have an instance made available, it can be. Infrastructures allow for full deployment of multiple configurations and service types. To have a replica of the larger system, an infrastructure can be used and all configurations within it made available and pre-configured as a single environment. The Macfile is the structure by which an Infrastructure is defined. Server configurations can be referenced within an Infrastructure using the Roles functionality.

With this understanding, AWS commands can be stored within an infrastructure and run as needed with minimal direct user input. There is not the same danger of selecting an invalid type or service configuration, as this is not required from the user when the infrastructure is run. Each AWS command can be seen easily in the Macfile content; the Infrastructure is completely transparent.