This is a snapshot of the documentation for the Production-Ready Containers Template product, available for purchase here

Important: The template is proprietary software, licensed under the terms of the software license. The short version: you can’t redistribute it outside your organization, e.g. to customers or in open source projects.

😢 What to do if a feature is missing or you encounter a bug 😢

If you have any questions or problems please email me, but please first read this document in detail to make sure your issue isn't already covered.

In addition, while I aim to support many common features, not everything will be supported out of the box in the template. Some options:

  1. Email me and ask about adding it.
  2. Remember that this is a template, not a tool. That means you can, and sometimes should, modify the code however you want to fit your particular needs.

🐋 Try it out: Temporary install 🐋

Before you start Dockerizing your application, you can try it out without touching your code. In the template directory, run:

$ mkdir /tmp/python-docker-test
$ python3 /tmp/python-docker-test
... say 'y' at the prompt ...

You now have a directory with everything needed to build a Docker image. To build an image called exampleapp:

$ cd /tmp/python-docker-test
$ python3 docker-build/scripts/ build exampleapp
Successfully finished building images:

By default two images are built: an intermediate image tagged with :<git-branch>-build where everything is built, and a final, smaller image tagged with :<git-branch>-runtime which is what you’ll actually run in production. In this case, you don’t have a Git branch, so instead it will use “nogit”. To run the resulting image, then, you’ll need to do:

$ docker container run --rm exampleapp:nogit-runtime

(On Linux you might need to prefix docker commands with sudo.)

If all goes well, you should see the default entrypoint output, telling you to edit docker-build/ and listing the installed packages.
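The branch-based tag scheme described above can be sketched as a small helper (illustrative only; the template's actual logic lives in its build scripts):

```python
def image_tags(git_branch=None):
    """Compute the build/runtime tag pair for an image, falling back to
    "nogit" when the directory isn't a Git checkout (sketch, not the
    template's real code)."""
    base = git_branch if git_branch else "nogit"
    return f"{base}-build", f"{base}-runtime"

# e.g. image_tags("main") gives ("main-build", "main-runtime")
```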

Note: If you get the following problem:

File "docker-build/scripts/", line 2
SyntaxError: Non-ASCII character '\xe2' in file docker-build/scripts/ on line 2, but no encoding declared; see for details

It’s because you used Python 2; make sure you use Python 3.7 or later.

Now it’s time to package your application!

The build process implemented in this template

You might think that a Dockerfile is sufficient to build a good Docker image, but that is not the case.

So this template includes not just a Dockerfile but also the necessary build infrastructure. At a high level, this template builds Docker images as follows:

  1. Build the images.
  2. Push the images to your image registry.
  3. Rebuild the images from scratch weekly via a CI configuration, so they pick up security updates.

There are actually two images built: one image used to compile and build your application, and a runtime image which is what you will run in production. This helps keep your runtime images small and more secure, since they don’t include extra—and unnecessary—things like compilers. See this article for details.

Usage, part I: Build an image for your application

1. Install the template

Let’s copy the necessary files into your application’s repository.

First, create a new branch in your version control.

Second, run the following command in the directory where you unpacked the template:

$ python3 /path/to/your/app/repository/
$ cd /path/to/your/app/repository/

This will give you a list of the installed files. It will overwrite any existing files, which is why you’re doing this in a branch!

Third, commit the initial files to the branch.

By having an unmodified version committed to your repository, you’ll have an easier time seeing what specific customizations you have made, and reverting back to the baseline if necessary.

Beyond this point you’ll be editing files in your repository, not in the original template directory!

2. Choose the Python version

Files you will need to modify


By default the template uses the python Docker base images, pre-configured with a specific Python version. If you want to use a different Python version, you need to change the configuration.

If you want to use a different base image altogether, see the comments in the top of the Dockerfile by the FROM statement.

What you need to do

At the top of the Dockerfile, change:


to the version you want, e.g.:


3. Make sure Python dependencies get installed

Before you begin

The template supports installing Python dependencies based on a number of different configuration mechanisms:

What you need to do

The template assumes these configuration files are in the root directory of your application. If you store them elsewhere, edit the relevant COPY instruction in the Dockerfile to point to the correct path.

Try it out:

First, you can check which of the configuration files will be used:

$ python3 docker-build/scripts/ diagnose

Second, build the image, and then check which packages were installed. If you’re running off of a Git repository, replace yourbranch below with the name of the current Git branch:

$ python3 docker-build/scripts/ build exampleapp
Successfully finished building images:
$ docker run exampleapp:yourbranch-build pip list
$ docker run exampleapp:yourbranch-runtime pip list

4. Customize code installation

Files you will need to modify


By default the template assumes you’re running your code out of the directory where it’s installed, with no additional work. Some applications, however, expect that they will be installed via pip install or poetry install, and then run via the installed version.

What you need to do

If your application requires installation, rather than running out of the directory where it’s built, edit the top of Dockerfile and change:


To say:


5. Optional: additional build steps

Files you will need to modify


Sometimes you will need additional build steps in your Dockerfile. For example:

Add those steps in the Dockerfile, specifically where it says:

# If you need to run additional commands to ensure your code runs correctly, run
# them here.
# RUN python

To minimize cache invalidation, try to follow the pattern you already see in the Dockerfile of first copying in just enough of the files to do the next build step. E.g. first copy in your package dependencies list, install those packages, and then in a later step do the actual build.

6. Customize the entrypoint docker-build/

Files you will need to modify


When your Docker image is run (via docker run yourimage or whatever your deployment environment is), it will run docker-build/ So you will need to edit this file to ensure it runs the correct command.

The default script just prints some debug output, so you’ll need to change it to run whatever command or commands your application needs to start. Make sure:

  1. The final command is prefixed with exec, so the server replaces the shell, receives signals directly, and shuts down cleanly.
  2. If you’re running a network server that should be accessible outside Docker, listen on interface rather than (see here for explanation).
  3. Logs go to stdout or stderr.

The script includes a commented-out example of a Gunicorn WSGI setup suitable for web applications.
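To illustrate those three rules, here is a hypothetical entrypoint written in Python (the template's real entrypoint is a shell script; "gunicorn" and "yourapp.wsgi:app" are placeholder names, adjust for your application):

```python
import os
import sys

def server_argv(bind=""):
    """Argv for the real server process. Binding on makes the
    server reachable from outside the container (placeholder names)."""
    return ["gunicorn", "--bind", bind, "--access-logfile", "-", "yourapp.wsgi:app"]

def main():
    argv = server_argv()
    print("starting:", " ".join(argv), file=sys.stderr)  # logs go to stderr
    # exec: the server replaces this process, so it receives signals
    # directly and the container shuts down cleanly.
    os.execvp(argv[0], argv)
```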

Try it out:

You should now be able to run your image locally on your own computer, so it’s time to try it out and see if it works. If you’re running off of a Git repository, replace yourbranch below with the name of the current Git branch:

$ python3 docker-build/scripts/ build yourapp
$ docker run --rm yourapp:yourbranch-runtime

If it blows up, you can run the image using a shell as the entrypoint. You can then debug the problem in-place, e.g. try running the custom entrypoint script directly, install new packages with pip (so long as they don’t require a compiler), and in general play around until you’ve figured out the issue.

If you’re running off of a Git repository, replace yourbranch below with the name of the current Git branch.

$ docker run -it yourapp:yourbranch-runtime bash
appuser@25444adb5dc0:~$ which python
appuser@25444adb5dc0:~$ bash /home/appuser/docker-build/ 

Notice that the application’s virtual environment is enabled by default; you don’t need to change anything.

7. Merge to your main branch

At this point you should be building images that work, so you can merge your temporary branch back into your main branch with a pull request or the equivalent.

Usage, part II: CI/CD integration

Now that you have a working image, you probably want to set it up to build automatically whenever someone pushes a change to your version control repository.

1. Push the image

At this point you have the image building locally, and so the next step is to make sure you have the credentials you need to push to an image registry, a server that stores images for you.

  1. Your organization may already have a registry set up, in which case you can use that. Otherwise, if you’re using GitLab you can use GitLab’s built-in registry, if you’re using GitHub you can use GitHub’s registry, or you can sign up for a free trial with Docker Hub or Quay, or set one up in your cloud provider (AWS, GCP, and Azure have them).
  2. Whichever registry you use, it should have instructions for how to login with docker login. Once you have those credentials, run docker login appropriately on your local machine.
  3. You should also figure out what the real name of your image is going to be, rather than the “yourapp” we’ve been using so far.

Let’s assume you’re using Quay, in which case the name will include the registry hostname, your user or organization name, and the repository name. If your chosen name is already in use, test with a different image name, so you don’t break your production images!

Try it out:

Next we run two commands, a build and a push, giving the image name:

$ export IMAGE_NAME=...  # <-- change appropriately
$ python3 docker-build/scripts/ build $IMAGE_NAME
$ python3 docker-build/scripts/ push $IMAGE_NAME

You can also push manually instead of using the push command; just make sure you push both images. We want both images pushed so that when you automate builds (see below) you can get fast rebuilds. If you’re running off of a Git repository, replace yourbranch below with the name of the current Git branch:

$ docker image push $IMAGE_NAME:yourbranch-build
$ docker image push $IMAGE_NAME:yourbranch-runtime

You should now see two images listed in the UI for your registry of choice, one with the -build tag and one with the -runtime tag. You can change runtime and build to some other tag by editing docker-build/ appropriately.

You should also be able to pull the image:

$ docker pull $IMAGE_NAME:yourbranch-runtime

If this worked, the next step is making the above run in your CI/build system.

2. Configure your build system

There are a number of situations where you likely want Docker images to be built automatically:

  1. In response to pull/merge requests, for use in automated tests and to make sure it builds.
  2. When you tag a revision in Git, on the presumption it’s a release of some sort.
  3. Whenever you merge to your main branch or equivalent.
  4. Weekly or daily, rebuild the image from scratch, to ensure the latest security updates.

See this article for a detailed explanation of why you want the latter.

If you’re using GitHub Actions

Copy .github/workflows/docker.yml into your own repository’s .github/workflows/ directory.

By default this uses GitHub’s Container Registry. You can change that to any other container registry by editing the username, password, and registry fields of the docker/login-action step.

You will also need to customize the environment variable called BUILD_IMAGE_NAME in the docker.yml to match the complete image name you’re building, including the registry.

By default the main/master branch and pull-requests to that branch have images built, as well as tags; if you don’t want that, you’ll need to edit docker.yml appropriately.

If you’re using GitLab CI

Copy the configuration from .gitlab-ci.yml into your own configuration.

By default GitLab’s built-in Docker image registry is used, but if you want to push to another registry you can change the configuration appropriately.

IMPORTANT: The included configuration will reuse cached layers indefinitely, which means you will not get security updates (see this article). To fix that, you will need to manually set up a weekly build that rebuilds without caching:

  1. Manually add a scheduled pipeline that runs this pipeline once a week (or once a day, or whatever interval you want).
  2. Set an environment variable (in the “Variables” section of the new scheduled pipeline) to ensure the build is done without caching: EXTRA_BUILD_ARGS should be set to --no-cache.

For instructions on setting up scheduled pipelines see the documentation.

By default the main/master branch and pull-requests to that branch have images built, as well as tags; if you don’t want that, you’ll need to edit .gitlab-ci.yml appropriately.

If you’re using something else

If you have some other build or CI system, you will need to run a build and push script manually in your CI system.

For example, you want a build script that looks something like this:

#!/bin/bash
set -euo pipefail  # bash strict mode
python3 docker-build/scripts/ build "$BUILD_IMAGE_NAME"
python3 docker-build/scripts/ push "$BUILD_IMAGE_NAME"

To rebuild images from scratch without caching (which you should do weekly or daily to get security updates) you can run docker-build/scripts/ build --no-cache yourimagename.

Try it out

Create a pull request to your main branch, and see if a Docker image gets built and pushed automatically.

Usage, part III: Robust builds, smaller images

Additional configuration will allow you to make sure you don’t push broken images, and that your images don’t include unnecessary files.

1. Add a smoke test to catch broken images

Before new images get pushed to the registry, it’s useful to run a minimal test to ensure the new image is not completely broken. For example, a web server can be tested by sending a simple HTTP query to a running container.

It’s worthwhile implementing such a test, and then adding it to your CI config right before the command that pushes to the registry. For more details see this article on Docker image smoke tests.

2. Customize .dockerignore and COPY so you don’t package unnecessary files

Files you might need to modify


.dockerignore lists files that shouldn’t be copied into the Docker image. If you have any large data files in the repository, or secrets, or any other files that shouldn’t be copied into the Docker image, add them here.

The file format is documented here.

Additionally, you can edit the Dockerfile so instead of just copying everything in the current directory (as filtered through .dockerignore) it only copies the files you need. For example, if you only need the yourpackage/ and data/ directories, you can change the “COPY . .” line to “COPY yourpackage/ data/ ./”.

Try it out

dive is a tool for figuring out what’s in your image, and where it’s coming from.

docker-show-context is another useful utility that lists which large files made it into the Docker context. You can use it to figure out if there are any large files being copied in.

Additional configuration

Maximum image size

The build will check the size of the runtime image, and complain if it’s too large. This will help catch unexpectedly large images. To change the configured maximum, edit MAX_IMAGE_SIZE_MB in docker-build/

Installing development dependencies

If you’re doing local development, you might want to build the image locally and have it install development dependencies like black or flake8. You can do so by using the --dev option; for example, this will build a Docker image called yourapp with development dependencies installed:

$ ./docker-build/scripts/ build --dev yourapp

The template supports this in three ways:

  1. If you’re using Poetry, it will install Poetry-configured development dependencies.
  2. If you’re using Pipenv, it will install Pipenv-configured development dependencies.
  3. If you are usually using requirements.txt to install dependencies, you can provide a dev-requirements.txt.

If you’re building manually, you can do:

$ docker build --build-arg INSTALL_DEV_DEPENDENCIES=1 .

And if you’re using Docker Compose, your docker-compose.yaml can do:

version: "3.9"
  yourapp:  # your service name
    build:
      context: .
        INSTALL_DEV_DEPENDENCIES: "1"

Custom labels

By default images will be labeled with the current Git branch or tag, and the current git commit. You can see this metadata by running:

$ docker image inspect yourimage:yourtag | grep git

If you want to add more labels, add them to EXTRA_CLI_BUILD_ARGS in docker-build/

Custom Docker tagging

By default, the Docker images are tagged based on the current Git branch or tag. If you’d like to customize this behavior, you’ll want to modify the various relevant options in docker-build/

Build secrets and other additional arguments to docker build

If you want to add additional arguments to docker build (see the CLI docs), you can add them via the EXTRA_CLI_BUILD_ARGS list in For example, if you want to run docker build with --secret id=MYSECRET,src=secret.txt you can do:


Docker Compose

The Dockerfile should work out of the box with Compose, with the caveat that if you use EXTRA_CLI_BUILD_ARGS in the config, you will need to add those arguments to the docker-compose.yml as well.

If you want to have Compose build the image, you can have the following docker-compose.yml:

version: "3.9"
  yourapp:  # your service name
    build:
      context: .

Note that docker-compose up doesn’t automatically rebuild the image when the code changes; you’ll need to do docker-compose up --build.

Also note that you may get faster builds on Linux if you enable BuildKit (on macOS and Windows BuildKit is enabled by default):

$ export DOCKER_BUILDKIT=1
$ export COMPOSE_DOCKER_CLI_BUILD=1

BuildKit support

By default the template uses BuildKit, which results in faster builds and supports additional features like build secrets.

Other recommendations

Pin your Python dependencies

Every application really requires two different sets of dependency descriptions:

  1. The logical, direct dependencies. For example, “this needs at least Flask 1.0 to run”.
  2. The complete set of dependencies, including transitive dependencies, pinned to particular versions. Transitive means dependencies-of-dependencies, and pinning means particular versions. For example, this might be “Flask==1.0.3, itsdangerous==1.1.0, werkzeug==0.15.4, click==7.0, jinja2==2.10.1, markupsafe==1.1.1”.

The first set of dependencies can be used to easily update the second set of dependencies when you want to upgrade (e.g. to get security updates).

The second set of dependencies is what you should use to build the application, in order to get reproducible builds: that is, to ensure each build will have the exact same dependencies installed as the previous build.

Some tools that do this are pipenv and poetry, but the easiest way to do it is with pip-tools. Since pip installs dependencies based on your current operating system, I’ve written a little script that runs pip-tools inside Docker so you get Linux-specific dependencies.
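One quick way to sanity-check that a requirements file really is fully pinned (a sketch for illustration; pip-tools' own output is always pinned):

```python
def unpinned_requirements(text):
    """Return requirement lines that aren't pinned with `==`.
    Skips comments, blank lines, and option lines like `-r` or `--hash`."""
    bad = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip trailing comments
        if not line or line.startswith("-"):
            continue
        if "==" not in line:
            bad.append(line)
    return bad
```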

Make sure you rebuild the images once a week, redeploying if necessary

Because of the use of caching, system packages won’t get security updates by default. This is why the default CI configuration above makes sure to rebuild the image from scratch, without any caching, once a week. Note that on GitLab CI this requires some manual setup.

Make sure you have set this up, otherwise you will eventually end up with insecure images.

You will then want to deploy these updated images to your production environment.

Python dependencies need to be updated regularly for security reasons

If you are using pinned dependencies, you will need an ongoing process to re-pin to new versions in order to get security updates and critical bugfixes.

GitHub can automatically notify you of security updates and can also update your dependencies in general.

There are also third-party services, like PyUp and others, that will scan your dependencies for vulnerabilities.

Runtime health checks

You can and should define health checks for a Docker image—a way for whatever system is running the container to check whether the application is functioning correctly. The Docker image format itself supports defining health checks; however, some systems like Kubernetes ignore these and have their own way of specifying them.

So check the documentation for the systems where you run your images, and add health checks.

Licensing and distribution

The template is licensed using the attached license. Essentially, you can’t distribute the template code to any other organization, and you may only be able to package a limited number of services depending on your purchase. The only exceptions are the and, which you can distribute as you wish (the latter is based on open source code, see the file for details).

What you can do:

Found a bug? Have a feature request?

If you have any questions or problems please email me.



To upgrade, copy the updated docker-build/scripts/ script into your repository.


Compared to version 1.x, the configuration has been simplified and is more template-like. In addition:

Upgrading from 1.0 is probably easiest by just starting from scratch. If the 1.0 template is working for your application, however, there is no pressing need to upgrade.