Docker build caching can lead to insecure images

Docker builds can be slow, so you want to use Docker’s layer caching, reusing previous builds to speed up the current one. While caching does speed up builds, it has a downside as well: caching can lead to insecure images.

In this article I’ll cover:

  1. Why caching can mean insecure images.
  2. Bypassing Docker’s build cache.
  3. The process you need in place to keep your images secure.

Note: Outside the specific topic under discussion, the Dockerfiles in this article are not examples of best practices, since the added complexity would obscure the main point of the article.

Learn more about best-practices Docker images for Python.

The problem: caching means no updates

Consider the following Dockerfile (again, not an example of best practices):

FROM ubuntu:18.04
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y --no-install-recommends python3

COPY myapp.py .
CMD python3 myapp.py

The first time we build it, it downloads a variety of Ubuntu packages, which takes a while. The second time we build it, however, docker build reuses the cached layers (assuming the cache from the first build is still available):

$ docker build -t myimage .
Sending build context to Docker daemon   2.56kB
Step 1/4 : FROM ubuntu:18.04
 ---> 94e814e2efa8
Step 2/4 : RUN apt-get update &&     apt-get upgrade -y &&     apt-get install -y --no-install-recommends python3
 ---> Using cache
 ---> 3cea2a611763
Step 3/4 : COPY myapp.py .
 ---> Using cache
 ---> f6173b1fa111
Step 4/4 : CMD python3 myapp.py
 ---> Using cache
 ---> 6222b50940a5
Successfully built 6222b50940a5
Successfully tagged myimage:latest

Until you change the text of that RUN command in the Dockerfile (“apt-get update etc.”), every build that relies on the cache will give you the same Ubuntu packages you installed the first time.

As long as you’re relying on the cache, your images will keep shipping those old, insecure packages, even after Ubuntu has released security updates for them.

Disabling caching

That suggests that sometimes you’re going to want to bypass the caching. You can do so by passing two arguments to docker build:

  1. --pull: This pulls the latest version of the base Docker image, instead of using the locally cached one.
  2. --no-cache: This ensures all additional layers in the Dockerfile get rebuilt from scratch, instead of relying on the layer cache.

If you add those arguments to docker build, you can ensure the new image has the latest system-level packages and security updates.
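Concretely, an uncached build is just the normal build command with both flags added. As a sketch (the myimage name is from the example above; the build_cmd wrapper and its --fresh flag are my own invention, not part of Docker), a small shell helper can switch between the two modes:

```shell
#!/bin/sh
# build_cmd prints the docker build invocation it would run; pass
# --fresh to bypass both the base-image cache and the layer cache.
# (It prints instead of executing so the sketch is runnable anywhere;
# drop the "echo" to actually build.)
build_cmd() {
    flags=""
    if [ "$1" = "--fresh" ]; then
        flags="--pull --no-cache"
    fi
    echo docker build $flags -t myimage .
}

build_cmd            # fast, cached build for day-to-day use
build_cmd --fresh    # uncached build that picks up security updates
```

The deliberate lack of quotes around $flags lets the two flags expand into separate arguments when set, and into nothing at all when empty.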

Rebuild your images regularly

If you want both the benefits of caching, and to get security updates within a reasonable amount of time, you will need two build processes:

  1. The normal image build process that happens whenever you release new code.
  2. Once a week, or every night, rebuild your Docker image from scratch using docker build --pull --no-cache to ensure you have security updates.
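One way to schedule that second process is a cron entry on the build machine. This is only a sketch: the schedule, the /srv/myapp path, and the myimage name are placeholders, and a scheduled job in your CI system would do the same work.

```shell
# Hypothetical crontab entry: every Sunday at 02:00, rebuild the image
# from scratch and push the fresh image to the registry.
0 2 * * 0  cd /srv/myapp && docker build --pull --no-cache -t myimage . && docker push myimage
```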



You might also enjoy:

» Elegantly activating a virtualenv in a Dockerfile
» Multi-stage Docker builds for Python: virtualenv, --user, and other methods
»» More articles on other topics