Avoiding insecure images from Docker build caching

Docker builds can be slow, so you want to use Docker's layer caching, reusing previous builds to speed up the current one. While this does speed up builds, there's a downside as well: caching can lead to insecure images.

In this article I’ll cover:

  1. Why caching can mean insecure images.
  2. Bypassing Docker’s build cache.
  3. The process you need in place to keep your images secure.

Note: Except where a specific best practice is being demonstrated, the Dockerfiles in this article are not examples of best practices, since the added complexity would obscure the main point of the article.


The problem: caching means no updates

I’m going to assume here that you’re using a stable base image, which means package updates are purely focused on security fixes and severe bug fixes. So as a first pass we can assume you actually want these updates to happen on a regular basis, because they’re both important and unlikely to break your code.

Consider the following Dockerfile:

FROM ubuntu:18.04
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y --no-install-recommends python3

COPY myapp.py .
CMD python3 myapp.py

The first time we build it, it will download a variety of Ubuntu packages, which takes a while. The second time we build it, however, docker build uses the cached layers (assuming you ensured the cache is populated):

$ docker build -t myimage .
Sending build context to Docker daemon   2.56kB
Step 1/4 : FROM ubuntu:18.04
 ---> 94e814e2efa8
Step 2/4 : RUN apt-get update &&     apt-get upgrade -y &&     apt-get install -y --no-install-recommends python3
 ---> Using cache
 ---> 3cea2a611763
Step 3/4 : COPY myapp.py .
 ---> Using cache
 ---> f6173b1fa111
Step 4/4 : CMD python3 myapp.py
 ---> Using cache
 ---> 6222b50940a5
Successfully built 6222b50940a5
Successfully tagged myimage:latest

Until you change the text of the RUN command in the Dockerfile (the “apt-get update && …” instruction), every time you do a build that relies on the cache you’ll get the same Ubuntu packages you installed the first time.

As long as you’re relying on caching, you’ll still get the old, insecure packages distributed in your images even after Ubuntu has released security updates.
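You can check this drift for yourself by asking apt, inside a cached image, whether newer package versions are available. This is only a sketch; the image name myimage matches the build above:

```shell
# List packages in the (possibly stale) image that have newer versions
# available in the Ubuntu archive. A non-empty list means the cached
# layers are missing updates, including security fixes.
docker run --rm myimage \
    sh -c "apt-get update -qq && apt list --upgradable 2>/dev/null"
```

If the output lists upgradable packages, the image built from cache is behind the current package archive.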

Disabling caching

That suggests that sometimes you’re going to want to bypass the caching. You can do so by passing two arguments to docker build:

  1. --pull: This pulls the latest version of the base Docker image, instead of using the locally cached one.
  2. --no-cache: This ensures all additional layers in the Dockerfile get rebuilt from scratch, instead of relying on the layer cache.

If you add those arguments to docker build, you can be sure the new image has the latest (system-level) packages and security updates.
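Putting the two flags together, a full from-scratch rebuild of the example image looks like this (same myimage tag as above):

```shell
# Re-pull ubuntu:18.04 from the registry and rebuild every layer,
# ignoring any locally cached layers:
docker build --pull --no-cache -t myimage .
```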

Rebuild your images regularly

If you want both the benefits of caching, and to get security updates within a reasonable amount of time, you will need two build processes:

  1. The normal image build process that happens whenever you release new code.
  2. Once a week, or every night, rebuild your Docker image from scratch using docker build --pull --no-cache to ensure you have security updates.
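The scheduled rebuild can be as simple as a cron job. This is just a sketch; the schedule, repository path, and image name below are placeholders you would adapt to your own setup:

```shell
# Example crontab entry: every Sunday at 02:00, rebuild the image from
# scratch so it picks up the latest base image and security updates.
# /srv/myapp and myimage are placeholder values.
0 2 * * 0  cd /srv/myapp && docker build --pull --no-cache -t myimage .
```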

This is just one solution, though; see this article for other approaches to fixing this problem.