

Quilt is an approachable way to use JavaScript to deploy and manage software running on cloud infrastructure. Quilt can be used to deploy anything from website backends to complex distributed systems.

We hope you will find this documentation to be a helpful guide to using Quilt. If you run into any issues using Quilt, don’t hesitate to contact us. If you notice issues in the documentation, please submit a GitHub issue, or feel free to fix it and submit a pull request!

Getting Started

This section explains how to install Quilt, and also serves as a brief, hands-on introduction to some Quilt basics.

Install Node.js

Quilt blueprints are written in Node.js. Only Node version v7.10.0 has been tested.

Installation instructions for various operating systems are available here.

Install Go

Quilt supports Go version 1.5 or later.

Find Go using your package manager or on the Golang website.


We recommend reading the overview of Go workspaces here.

Before installing Quilt, you’ll need to set up your GOPATH. Assuming the root of your Go workspace will be $HOME/gowork, execute the following export commands in your terminal to set up your GOPATH.

export GOPATH="$HOME/gowork"
export PATH="$PATH:$GOPATH/bin"

It would be a good idea to add these commands to your .bashrc so that they do not have to be run again.

Download and Install Quilt

Clone the repository into your Go workspace: go get

This command also automatically installs Quilt. If the installation was successful, then the quilt command should execute successfully in your shell.

Configure A Cloud Provider

Quilt currently supports Amazon EC2, DigitalOcean, and Google Compute Engine; support for running locally with Vagrant is currently experimental. Refer to the relevant section below to set up the cloud provider that you would like to use. Contact us if you’re interested in a different cloud provider.

Amazon EC2

For Amazon EC2, you’ll first need to create an account with Amazon Web Services and then find your access credentials. That done, you simply need to populate the file ~/.aws/credentials with your Amazon credentials:

[default]
aws_access_key_id = <YOUR_ID>
aws_secret_access_key = <YOUR_SECRET_KEY>


DigitalOcean

To deploy a DigitalOcean droplet in the sfo1 zone of size 512mb as a Worker:

deployment.deploy(new Machine({
  provider: "DigitalOcean",
  region: "sfo1",
  size: "512mb",
  role: "Worker"
}));


  1. Create a new key here. Both read and write permissions are required.

  2. Save the key in ~/.digitalocean/key on the machine that will be running the Quilt daemon.

Floating IPs

To assign a floating IP to a machine, simply specify the IP as an attribute. For example,

deployment.deploy(new Machine({
  provider: "DigitalOcean",
  region: "sfo1",
  size: "512mb",
  floatingIp: "",
  role: "Worker"
}));

Creating a floating IP is slightly unintuitive. Unless there are already droplets running, the floating IP tab under “Networking” doesn’t allow users to create floating IPs. However, this link can be used to reserve IPs for a specific datacenter. If that link breaks, floating IPs can always be created by creating a droplet, then assigning it a new floating IP. The floating IP will still be reserved for use after disassociating it.

Note that DigitalOcean charges a fee of $.0006/hr for floating IPs that have been reserved, but are not associated with a droplet.

Google Compute Engine

Quilt supports the Google provider for booting instances on the Google Compute Engine. For example, to deploy a GCE machine in the us-east1-b zone of size n1-standard-1 as a Worker:

deployment.deploy(new Machine({
  provider: "Google",
  region: "us-east1-b",
  size: "n1-standard-1",
  role: "Worker"
}));


  1. Create a Google Cloud Platform Project: All instances are booted under a Cloud Platform project. To setup a project for use with Quilt, go to the console page, then click the project dropdown at the top of page, and hit the plus icon. Pick a name, and create your project.

  2. Enable the Compute API: Select your newly created project from the project selector at the top of the console page, and then select API Manager -> Library from the navbar on the left. Search for and enable the Google Compute Engine API.

  3. Save the Credentials File: Go to Credentials on the left navbar (under API Manager), and create credentials for a Service account key. Create a new service account with the Project -> Editor role, and select the JSON output option. Copy the downloaded file to ~/.gce/quilt.json on the machine from which you will be running the Quilt daemon.

That’s it! You should now be able to boot machines on the Google provider.

Your First Quilt-managed Infrastructure

We suggest you read quilt/nginx/main.js to understand the infrastructure defined by this Quilt.js blueprint.

Acquire the Nginx Blueprint

In order to run the Nginx blueprint, we’ll have to download it first. We’ll simply clone it:

git clone
cd nginx

Install Blueprint Dependencies

The Nginx blueprint depends on the @quilt/quilt module. More complicated blueprints may have other dependencies that would get pulled in as well. To install all dependencies, run npm install . in the blueprint directory.

Configure quilt/nginx/main.js

Set Up Your SSH Authentication

Quilt-managed Machines use public key authentication to control SSH access. SSH authentication is configured with the sshKeys Machine attribute. Currently, the easiest way to set up your SSH access is by using the githubKeys() function. Given your GitHub username, the function grabs your public keys from GitHub so they can be used to configure SSH authentication. If you can access GitHub repositories through SSH, then you can also SSH into a githubKeys-configured Machine.

If you would like to use githubKey authentication, open main.js, import the githubKeys function from @quilt/quilt, and set the sshKeys appropriately.

const {createDeployment, Machine, githubKeys} = require('@quilt/quilt');
var baseMachine = new Machine({
    sshKeys: githubKeys("CHANGE_ME"),
    // ...other Machine attributes...
});

Deploying quilt/nginx/main.js

In one shell, start the Quilt daemon with quilt daemon. In another shell, execute quilt run ./main.js. Quilt will set up several Ubuntu VMs on your cloud provider as Workers, and these Workers will host Nginx Docker containers as specified in quilt/nginx/app.js (you do not have to understand or edit this file).

Accessing the Worker VM

It will take a while for the VMs to boot up, for Quilt to configure the network, and for Docker containers to be initialized. When a machine is marked Connected in the console output, the corresponding VM is fully booted and has begun communicating with Quilt.

The public IP of the Worker VM can be deduced from the console output. In the following output, the Worker VM’s public IP appears in the PublicIP field of the Machine-4 line:

INFO [Nov 11 13:23:10.266] db.Machine:
    Machine-2{Master, Amazon us-west-1 m4.large, sir-3sngfxdh, PublicIP=, PrivateIP=, Disk=32GB, Connected}
    Machine-4{Worker, Amazon us-west-1 m4.large, sir-19bid86g, PublicIP=, PrivateIP=, Disk=32GB, Connected}

Run ssh quilt@<WORKER_PUBLIC_IP> to access a privileged shell on the Worker VM.

Inspecting Docker Containers on the Worker VM

You can run docker ps to list the containers running on your Worker VM.

quilt@ip-172-31-0-87:~$ docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS               NAMES
a2ac27cfd313   "/usr/local/bin/etcd "   11 minutes ago      Up 11 minutes                           etcd
0f407bd0d5c4        quilt/ovs                    "run ovs-vswitchd"       11 minutes ago      Up 11 minutes                           ovs-vswitchd
7b65a447fe54        quilt/ovs                    "run ovsdb-server"       11 minutes ago      Up 11 minutes                           ovsdb-server
deb4f98db8eb        quilt/quilt:latest           "quilt minion"           11 minutes ago      Up 11 minutes                           minion

Any Docker containers defined in a blueprint are placed on one of your Worker VMs. In addition to these user-defined containers, Quilt also places several support containers on each VM. Among these support containers is minion, which locally manages Docker and allows Quilt VMs to talk to each other and to your local computer.

Loading the Nginx Webpage

By default, Quilt-managed containers are disconnected from the public internet and isolated from one another. In order to make the Nginx container accessible from the public internet, quilt/nginx/app.js explicitly opens port 80 on the Nginx container to the outside world:

publicInternet.connect(port, webTier);

From your browser via http://<WORKER_PUBLIC_IP>, or on the command-line via curl <WORKER_PUBLIC_IP>, you can load the Nginx welcome page served by your Quilt cluster.

Cleaning up

If you’d like to destroy the infrastructure you just deployed, you can either modify the blueprint to remove all of the Machines, or use the command quilt stop. Both options will cause Quilt to destroy all of the Machines in the deployment.

Next Steps: Writing your own Quilt Blueprint

This guide illustrated how to use Quilt to run an application that already had a blueprint written. Next, you can try writing your own Quilt blueprints for new applications that don’t yet have blueprints written; to do that, check out the guide to writing Quilt blueprints.

Blueprint Writers Guide

This guide describes how to write the Quilt blueprint for a new application, using the Lobsters application as an example. Lobsters is an open source project that implements a Reddit-like web page, where users can post content and vote up or down other content.

Decomposing the application into containers

The first question you should ask yourself is “how should this application be decomposed into different containers?” If you’ve already figured this out for your application (e.g., if you’re copying from a Kubernetes setup that already has Dockerfiles defined), you can skip the rest of this section.

A very brief introduction to containers

You can think of a container as being like a process: as a coarse rule of thumb, anything that you’d launch as its own process should have its own container with Quilt. While containers are lightweight (like processes), they each have their own environment (including their own filesystem and their own installed software) and are isolated from other containers running on the same machine (unlike processes). If you’ve never used containers before, we suggest starting with the Docker getting started guide.

Specifying the containers for your application

As an example of how to specify the containers for your application, let’s use the Lobsters example. Lobsters requires mysql to run, so we’ll use one container for mysql. We’ll use a second container for the Lobsters program to run in.

For each container that your application uses, you’ll need a container image. The container image describes the filesystem that will be on the container when it’s started. For mysql, for example, the container image includes all of the dependencies that mysql needs to run, so that after starting a new mysql container, you can simply launch mysql (no more installation is needed). Most popular applications already have containers that you can use, and a quick Google search yields an existing mysql image that we can use for Lobsters.

For the container that runs Lobsters, we’ll need to create a new image by writing our own Dockerfile, which describes how the Docker image should be created. In this case, the Dockerfile is relatively simple:

# This container is based on the Ruby image, which means that it
# automatically inherits the Ruby installation defined in that image.
FROM ruby:2.3.1

# Install NodeJS, which is required by Lobsters.
RUN apt-get update && apt-get install nodejs -y

# Download and build the code.
RUN git clone git://
WORKDIR lobsters
RUN bundle

# Add a file to the container that contains startup code for Lobsters. This
# command assumes that the startup script is in the same directory as this
# Dockerfile.
COPY /lobsters/

# When the container starts, it should run the Lobsters server using the
# bash file that we copied above.  This is a common
# "gotcha" to people new to containers: unlike VMs, each container is based
# on a process (in this case, rails, which is started at the end of the
# startup script) and will be shut down when that process stops.
ENTRYPOINT ["/bin/sh", "/lobsters/"]

In this case, we wrote an additional bash script to help start the application. The important thing about that script is that it does some setup that needed to be done after the container was started, so it couldn’t be done in the Dockerfile. For example, the first piece of setup it does is to initialize the SQL database. Because that requires a connection to mysql, it needs to be done after the container is launched (and configured to access the mysql container, as discussed below). After initializing the database, the script launches the rails server, which is the main process run by the container.

To create a docker image using this file, run docker build in the directory with the Dockerfile (don’t forget the period at the end!):

$ docker build -t kayousterhout/lobsters .

In this case, we called the resulting image kayousterhout/lobsters, because we’ll push it to the Dockerhub for kayousterhout; you’ll want to use your own Dockerhub id to name your images.

This will take a few minutes, and creates a new image with the name kayousterhout/lobsters. If you want to play around with the new container, you can use Docker to launch it locally:

$ docker run --name lobsters-test kayousterhout/lobsters

To use a shell on your new container to poke around (while the rails server is running), use:

$ docker exec -it lobsters-test /bin/bash

This can be helpful for making sure everything was installed and is running as expected (although in this case, Lobsters won’t work when you start it with Docker, because it’s not yet connected to a mysql container).

Deploying the containers with Quilt

So far we have a mysql container image (we’re using an existing one hosted on Dockerhub) and a container image that we just made. You should similarly have the containers ready for your application. Up until now, we haven’t done anything Quilt-specific: if you were using another container management service like Kubernetes, you would have had to create the container images like we did above. These containers aren’t yet configured to communicate with each other, which is what we’ll set up with Quilt. We’ll also use Quilt to describe the machines to launch for the containers to run on.

To run the containers for your application with Quilt, you’ll need to write a Quilt blueprint. Quilt blueprints are written in Javascript, and the Quilt Javascript API is described here. In this guide, we’ll walk through how to write a Quilt blueprint for Lobsters, but the Quilt API has more functionality than we can describe here. See the API guide for more usage information.

Writing the Quilt blueprint for MySQL

First, let’s write the Quilt blueprint to get the MySQL container up and running. We need to create a container based on the mysql image:

var sqlContainer = new Container("mysql:5.6.32");

Here, the argument to Container is the name of an image. You can also pass in a Dockerfile to use to create a new image, as described in the Javascript API documentation.

Next, the SQL container requires some environment variables to be set. In particular, we need to specify a root password for SQL. We can set the root password to foo with the setEnv function:

sqlContainer.setEnv("MYSQL_ROOT_PASSWORD", "foo");

All containers need to be part of a service in order to be executed. In this case, the service just has our single mysql container. Each service is created using a name and a list of containers:

var sqlService = new Service("sql", [sqlContainer]);

The SQL service is now initialized.

Writing the Quilt blueprint for

Next, we can similarly initialize the lobsters service. The lobsters service is a little trickier to initialize because it requires an environment variable (DATABASE_URL) to be set to the URL of the SQL container. Quilt containers are each assigned unique hostnames when they’re initialized, so we can create the lobsters container and initialize the URL as follows:

var lobstersContainer = new Container("kayousterhout/lobsters");
var sqlDatabaseUrl = "mysql2://root:" + mysqlOpts.rootPassword + "@" +
    sqlService.hostname() + ":3306/lobsters";
lobstersContainer.setEnv("DATABASE_URL", sqlDatabaseUrl);
var lobstersService = new Service("lobsters", [lobstersContainer]);
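To see the string this produces, here is a self-contained sketch with the Quilt objects faked out (the hostname value is a placeholder; real hostnames are assigned by Quilt):

```javascript
// Stand-ins for the Quilt objects above (values are placeholders, not
// the real Quilt API).
var mysqlOpts = { rootPassword: "foo" };
var sqlService = { hostname: function () { return "sql.q"; } };

var sqlDatabaseUrl = "mysql2://root:" + mysqlOpts.rootPassword + "@" +
    sqlService.hostname() + ":3306/lobsters";

console.log(sqlDatabaseUrl); // mysql2://root:foo@sql.q:3306/lobsters
```
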
Allowing network connections

At this point, we’ve written code to create a mysql service and a lobsters service. With Quilt, by default, all network connections are blocked. To allow lobsters to talk to mysql, we need to explicitly open the mysql port (3306):

lobstersService.connect(3306, sqlService);

Because lobsters is a web application, the relevant port should also be open to the public internet on the lobsters service. Quilt has a publicInternet variable that can be used to connect services to any IP address:

publicInternet.connect(3000, lobstersService);

Deploying the application on infrastructure

Finally, we’ll use Quilt to launch some machines, and then start our services on those machines. First, we’ll define a “base machine.” We’ll deploy a few machines, and creating the base machine is a useful way to define a template that all of the machines in our deployment will be based on. In this case, the base machine will be an Amazon instance that allows SSH access from the public key “bar”:

var baseMachine = new Machine({provider: "Amazon", sshKeys: ["ssh-rsa bar"]});

Now, using that base machine, we can deploy a master and a worker machine. All Quilt deployments must have one master, which keeps track of state for all of the machines in the cluster, and zero or more workers. To deploy machines and services, you must create a deployment object, which maintains state about the deployment.

var deployment = createDeployment();
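The machine-deployment calls themselves are elided above; one plausible shape, mirroring the role attribute used in the provider examples earlier in this document (a sketch only, not verified against the API):

```javascript
// Sketch: deploy one Master and one Worker modeled on baseMachine.
deployment.deploy(new Machine({
  provider: "Amazon",
  sshKeys: ["ssh-rsa bar"],
  role: "Master"
}));
deployment.deploy(new Machine({
  provider: "Amazon",
  sshKeys: ["ssh-rsa bar"],
  role: "Worker"
}));
```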

We’ve now defined a deployment with a master and worker machine. Let’s finally deploy the two services on that infrastructure:

deployment.deploy(sqlService);
deployment.deploy(lobstersService);

We’re done! Running the blueprint is now trivial. With a quilt daemon running, run your new blueprint (which, in this case, is called lobsters.js):

quilt run lobsters.js

Now users of lobsters, for example, can deploy it without needing to worry about the details of how different services are connected with each other. All they need to do is to quilt run the existing blueprint.

Quilt.js API Documentation

This section documents use of the Quilt JavaScript library, which is used to write blueprints.


Container

The Container object represents a container to be deployed.

Specifying the Image

The first argument of the Container constructor is the image that the container should run.

If a string is supplied, the image at that repository is used.


Instead of supplying a link to a pre-built image, Quilt also supports building images in the cluster. When specifying a Dockerfile to be built, an Image object must be passed to the Container constructor.

For example,

new Container(new Image("my-image-name",
  "FROM nginx\n" +
  "RUN cd /web_root && git clone"));

would deploy an image called my-image-name built on top of the nginx image, with the repository cloned into /web_root.

If the Dockerfile is saved as a file, it can simply be read in:

new Container(new Image("my-image-name", read("./Dockerfile")))

If a user runs a blueprint that uses a custom image, then runs another blueprint that changes the contents of that image’s Dockerfile, the image is re-built and all containers referencing that Dockerfile are restarted with the new image.

If multiple containers specify the same Dockerfile, the same image is reused for all containers.

If two images with the same name but different Dockerfiles are referenced, an error is thrown.
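As an illustration of these rules (a sketch; the Dockerfile contents are abbreviated):

```javascript
var img = new Image("my-image-name", "FROM nginx");

// Fine: both containers reference the same Dockerfile, so one image is
// built and shared between them.
var a = new Container(img);
var b = new Container(img);

// Error: same image name, but different Dockerfile contents.
var c = new Container(new Image("my-image-name", "FROM ubuntu"));
```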


Container.filepathToContent

Container.filepathToContent defines text files to be installed on the container before the container starts. Both the key and value are strings.

For example,

  "/etc/myconf": "foo"

would create a file at /etc/myconf containing the text foo.

new Container("haproxy").withFiles({
  "/etc/myconf": "foo"
});

would create a haproxy instance with a text file /etc/myconf containing foo.

If the files change on the container after it boots, Quilt does not restart the container. However, if the file content specified by filepathToContent changes in the blueprint, Quilt will destroy the old container and boot a new one with the proper files.

The files are installed with permissions 0644. Parent directories are automatically created.


Container.hostname

Container.hostname gets the container’s hostname. If the container has no hostname, an error is thrown.


Container.setHostname

Container.setHostname gives the container a hostname at which the container can be reached.

If multiple containers have the same hostname, an error is thrown during the vetting process.


Machine

The Machine object represents a machine to be deployed.

Its attributes are:


Quilt.js has some basic support for reading files from the local filesystem, which can be used either as Dockerfiles for container images, or imported directly into a container at boot. These utilities should be considered experimental and are likely subject to change.


read()

read() reads the contents of a file into a string. The file path is passed in as an argument to the function. For example, in the below example, contents will contain a string representing the contents of the file located at /path/to/file.txt.

var contents = read("/path/to/file.txt")


readDir()

readDir() lists the contents of a directory. It takes the file path of a directory as its only argument, and returns a list of objects representing files in that directory. Each object contains the fields name (the name of the file), and isDir (true if the path is a directory instead of a file). For example, in the walk() function below, readDir() is used to recursively execute a callback on every file in a directory.

function walk(path, fn) {
        var files = readDir(path);
        for (var i = 0; i < files.length; i++) {
                var filePath = path + "/" + files[i].name;
                if (files[i].isDir) {
                        walk(filePath, fn);
                } else {
                        fn(filePath);
                }
        }
}
Developing Quilt

Developer Setup

The project is written in Go and therefore follows the standard Go workspace project style. The first step is to create a Go workspace as suggested in the documentation.

We currently require Go version 1.3 or later. Ubuntu 15.10 uses this version by default, so you should just be able to apt-get install golang to get started.

Checkout the source code:

git clone $GOPATH/src/

Once this is done you can install the AWS API and various other dependencies automatically:

go get

And finally to build the project run:

go install

Or alternatively just “go install” if you’re in the repo.

Build Tools

To do things beyond basic build and install, several additional build tools are required. These can be installed with the make go-get target.


If you change any of the proto files, you’ll need to regenerate the protobuf code. We currently use protoc v3. On a Mac with homebrew, you can install protoc v3 using:

brew install --devel protobuf

On other operating systems you can directly download the protoc binary here, and then add it to your $PATH.

You’ll also need to install protobuf go bindings:

go get -u{proto,protoc-gen-go}

To generate the protobufs simply call:

make generate


We use govendor for dependency management. If you are using Go 1.5 make sure GO15VENDOREXPERIMENT is set to 1.

To add a new dependency:

  1. Run go get foo/bar
  2. Edit your code to import foo/bar
  3. Run govendor add +external

To update a dependency:

govendor update +vendor

Developing the Minion

Whenever you develop code in minion, make sure you run your personal minion image, and not the default Quilt minion image. To do that, follow these steps:

  1. Create a new empty repository on your favorite registry - Docker Hub, for example.
  2. Modify quiltImage in cloudcfg.go to point to your repo.
  3. Modify Version in version.go to be “latest”. This ensures that you will be using the most recent version of the minion image that you are pushing up to your registry.
  4. Create a .mk file (for example: to override variables defined in Makefile. Set REPO to your own repository (for example: REPO = sample_repo) inside the .mk file you created.
  5. Create the docker image: make docker-build-quilt
    • Docker for Mac and Windows is in beta. See the docs for install instructions.
  6. Sign in to your image registry using docker login.
  7. Push your image: make docker-push-quilt.

After the above setup, you’re good to go - just remember to build and push your image first, whenever you want to run the minion with your latest changes.

Contributing Code

We highly encourage contributions to Quilt from the Open Source community! Everything from fixing spelling errors to major contributions to the architecture is welcome. If you’d like to contribute but don’t know where to get started, feel free to reach out to us for some guidance.

The project is organized using a hybrid of the Github and Linux Kernel development workflows. Changes are submitted using the Github Pull Request System and, after appropriate review, fast-forwarded into master. See Submitting Patches for details.

Coding Style

The coding style is as defined by the gofmt tool: whatever transformations it makes on a piece of code are considered, by definition, the correct style. In addition, golint, go vet, and go test should pass without warning on all changes. An easy way to check these requirements is to run make lint check on each patch before submitting a pull request.

Unlike official Go style, in Quilt lines should be wrapped to 89 characters.

The fundamental unit of work in the Quilt project is the git commit. Each commit should be a coherent whole that implements one idea completely and correctly. No commits should break the code, even if they “fix it” later. Commit messages should be wrapped to 80 characters and begin with a title of the form <Area>: <Title>. The title should be capitalized, but not end with a period. For example, provider: Move the provider interfaces into the cluster directory is a good title. When possible, the title should fit in 50 characters.

All but the most trivial of commits should have a brief paragraph below the title (separated by an empty line), explaining the context of the commit. Why the patch was written, what problem it solves, why the approach was taken, what the future implications of the patch are, etc.

Commits should have proper author attribution, with the full name of the commit author, capitalized properly, with their email at the time of authorship. Commits authored by more than one person should have a Co-Authored-By: tag at the end of the commit message.
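Putting these conventions together, a commit message skeleton looks like the following (the title is the example from above; the body text is a placeholder):

```
provider: Move the provider interfaces into the cluster directory

<A brief paragraph explaining why the patch was written, what problem
it solves, and why this approach was taken.>

Co-Authored-By: Full Name <email@example.com>
```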

Submitting Patches

Patches are submitted for inclusion in Quilt using a Github Pull Request.

A pull request is a collection of well formed commits that tie together in some theme, usually the larger goal they’re trying to achieve. Completely unrelated patches should be included in separate pull requests.

Pull requests are reviewed by one person: either by a committer, if the code was submitted by a non-committer, or by a non-committer otherwise. You do not need to choose a reviewer yourself; quilt-bot will randomly select a reviewer from the appropriate group. Once the reviewer has approved the pull request, a committer will merge it.

Once the patch has been approved by the first reviewer, quilt-bot will assign a committer to do a second (sometimes cursory) review. The committer will either merge the patch, provide feedback, or if a great deal of work is still needed, punt the patch back to the original reviewer.

It should be noted that the code review assignment is just a suggestion. If another contributor, or a member of the public for that matter, happens to do a detailed review and provide a +1, then the assigned reviewer is relieved of their responsibility. If you’re not the assigned reviewer, but would like to do the code review, it may be polite to comment in the PR to that effect so the assigned reviewer knows they need not review the patch.

We expect patches to go through multiple rounds of code review, each involving multiple changes to the code. After each round of review, the original author is expected to update the pull request with appropriate changes. These changes should be incorporated into the patches in their most logical places, i.e., they should be folded into the original patches or, if appropriate, inserted as a new patch in the series. Changes should not be simply tacked on to the end of the series as tweaks to be squashed in later; at all stages the PRs should be ready to merge without reorganizing commits.

The Quilt Daemon

Two processes need to be running for blueprints to be enforced: quilt daemon and quilt run. quilt daemon does the heavy lifting – it’s responsible for enforcing blueprints. quilt run is responsible for compiling blueprints and sending them to the daemon to be enforced.

Code Structure

Quilt is structured around a central database (db) that stores information about the current state of the system. This information is used both by the global controller (Quilt Global) that runs locally on your machine, and by the minion containers on the remote machines.


Quilt uses the basic db database implemented in db.go. This database supports insertions, deletions, transactions, triggers and querying.

The db holds the tables defined in table.go, and each table is simply a collection of rows. Each row is in turn an instance of one of the types defined in the db directory - e.g. Cluster or Machine. Note that a table holds instances of exactly one type. For instance, in ClusterTable, each row is an instance of Cluster; in ConnectionTable, each row is an instance of Connection, and so on. Because of this structure, a given row can only appear in exactly one table, and the developer therefore performs insertions, deletions and transactions on the db, rather than on specific tables. Because there is only one possible table for any given row, this is safe.

The canonical way to query the database is by calling a SelectFromX function on the db. There is a SelectFromX function for each type X that is stored in the database. For instance, to query for Connections in the ConnectionTable, one should use SelectFromConnection.

Quilt Global

The first thing that happens when Quilt starts is that your blueprint is parsed by Quilt’s JavaScript library, quilt.js. quilt.js then puts the connection and container specifications into a sensible format and forwards them to the engine.

The engine is responsible for keeping the db updated so it always reflects the desired state of the system. It does so by computing a diff of the config and the current state stored in the database. After identifying the differences, engine determines the least disruptive way to update the database to the correct state, and then performs these updates. Notice that the engine only updates the database, not the actual remote system - cluster takes care of that.

The cluster takes care of making the state of your system equal to the state of the database. cluster continuously checks for updates to the database, and whenever the state changes, cluster boots or terminates VMs in your system to reflect the changes in the db.

Now that VMs are running, the minion container will take care of starting the necessary system containers on its host VM. The foreman acts as the middleman between your locally run Quilt Global and the minion on the VMs. Namely, the foreman configures the minion, notifies it of its (the minion‘s) role, and passes it the policies from Quilt Global.

All of these steps are done continuously so the blueprint, database and remote system always agree on the state of the system.

Quilt Remote

As described above, cluster is responsible for booting VMs. On boot, each VM runs docker and a minion. The VM is furthermore assigned a role - either worker or master - which determines what tasks it will carry out. The master minion is responsible for control related tasks, whereas the worker VMs do “the actual work” - that is, they run containers. When the user specifies a new container in the config file, the scheduler will choose a worker VM to boot this container on. The minion on the chosen VM is then notified, and will boot the new container on its host. The minion is similarly responsible for tearing down containers on its host VM.

While it is possible to boot multiple master VMs, there is only one effective master at any given time. The remaining master VMs simply perform as backups in case the leading master fails.