All Pythons are slow, but some are faster than others

Python is not the fastest language around, so any performance boost helps, especially if you’re running at scale. It turns out that depending on where you install Python from, its performance can vary quite a bit: choosing the wrong build of Python can cut your speed by 10-20%.

Let’s look at some numbers.

Comparing Python builds

I ran three benchmarks from the pyperformance suite on four different builds of Python 3.9 (code is here):

  1. python:3.9-buster, the “official” Python Docker image.
  2. Ubuntu 20.04, via the ubuntu:20.04 Docker image.
  3. Anaconda Python on ubuntu:20.04.
  4. Conda-Forge on ubuntu:20.04.
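If you want a rough sense of how two builds compare without the full pyperformance harness, Python’s built-in timeit module is enough for a quick check; a minimal sketch (the workload below is an illustrative stand-in, not one of the pyperformance benchmarks):

```python
import timeit

# A toy workload standing in for a real benchmark; pyperformance's
# benchmarks (2to3, django_template, unpickle_pure_python) are far
# more representative of real code.
def workload():
    return sorted(str(i) for i in range(1000))

# Run the workload repeatedly and keep the best time, which reduces
# noise from other processes running on the machine.
best = min(timeit.repeat(workload, number=100, repeat=5))
print(f"best of 5 runs: {best * 1000:.1f}ms per 100 iterations")
```

Run the same script under each build you’re comparing (for example, inside each Docker image) and compare the numbers.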

If you’re not familiar with Conda, it’s a packaging system that ships precompiled versions of pretty much every popular library and executable (including Python itself), everything except the standard C library. Anaconda, the company that created Conda, provides a package channel, and there is also a community project called Conda-Forge that provides Python and thousands of other packages.

All of the benchmark runs were inside a Docker container, on Fedora 33, with an Intel Xeon CPU E3-1226 v3 @ 3.30GHz. Docker seems to impose a surprisingly high performance overhead on Fedora, but that ought to be equalized across runs of the same benchmarks, and I got similar results in previous benchmarking runs using Podman (which has lower overhead but mysteriously broke over the weekend).

Here are the results, reported as mean and standard deviation; each benchmark was run 10 times, and I repeated the whole comparison multiple times with similar results. Lower is better, since we’re measuring elapsed time. The Conda-Forge Python is fastest, followed by Ubuntu 20.04; Docker’s official Python image is the slowest.

Python build        2to3          django_template   unpickle_pure_python
Conda-Forge         491ms ± 3ms   78ms ± 0.8ms      514us ± 5us
Ubuntu 20.04        512ms ± 3ms   80ms ± 0.7ms      537us ± 7us
Anaconda            523ms ± 5ms   86ms ± 2.3ms      550us ± 3us
python:3.9-buster   543ms ± 3ms   92ms ± 0.6ms      590us ± 12us

Why the differences?

It turns out that compiling Python for maximum performance is actually quite tricky: it involves profiler-guided optimizations, where runs from real code are used to guide the compiler, and a variety of knobs you can tweak.
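As a concrete example, when compiling CPython from source, profile-guided optimization is enabled through configure flags; a sketch of the relevant invocation (assuming you’ve already unpacked a CPython source tarball):

```shell
# From inside an unpacked CPython source tree:
# --enable-optimizations runs a training workload and feeds the
# profiling data back into the compiler (profile-guided optimization);
# --with-lto adds link-time optimization on top.
./configure --enable-optimizations --with-lto
make -j"$(nproc)"
make install
```

Each distributor makes its own choices about these flags (and about compiler version, CFLAGS, and so on), which is part of why the builds differ.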

One knob is whether the core of the Python implementation is in a shared library, or in the python executable itself. The shared library version tends to be rather slower.

Python build        python links to libpython.so?
Conda-Forge         No
Ubuntu 20.04        No
Anaconda            No
python:3.9-buster   Yes
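You can check which kind of build you’re running with the standard sysconfig module; a small sketch (Py_ENABLE_SHARED is the build-time variable that records this choice):

```python
import sysconfig

# Py_ENABLE_SHARED is 1 when the interpreter core lives in
# libpython.so, and 0 when it is statically linked into the
# python executable itself.
shared = sysconfig.get_config_var("Py_ENABLE_SHARED")
print("links to libpython.so:", bool(shared))
```

On Linux you can double-check with `ldd $(which python3)` and look for libpython in the output.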

Fedora and RHEL also use the shared library version of Python. However they—and in the future, Python 3.10 by default—use the -fno-semantic-interposition option to speed things up. The “official” python Docker image doesn’t use this option yet. So that explains at least part of why the official Docker image is so much slower.
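You can also inspect the compiler flags a given build recorded at compile time, again via sysconfig; a sketch checking for -fno-semantic-interposition and profile-guided optimization (the string matching here is a heuristic, since flags can be spelled in other ways):

```python
import sysconfig

# CFLAGS holds the compiler flags recorded when the interpreter was
# built; CONFIG_ARGS holds the original ./configure invocation.
cflags = sysconfig.get_config_var("CFLAGS") or ""
config_args = sysconfig.get_config_var("CONFIG_ARGS") or ""

print("-fno-semantic-interposition:",
      "-fno-semantic-interposition" in cflags)
print("profile-guided optimization:",
      "--enable-optimizations" in config_args)
```

Running this under each build in the table above shows how differently they were configured.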

That isn’t enough to explain all these differences in performance, however, nor the differences between the other builds. Other options might include how and whether they do profiler-guided optimization, other compiler flags, compiler versions, glibc differences, and more.

My hope is that the various organizations doing Python builds continue to learn from each other, and perhaps even set up a cross-build performance comparison. The baseline performance of Python could be a lot better than it is today in almost all installed environments.

Takeaways: be careful which Python you choose

I was surprised at how much performance variation there was between different builds of Python. And there are many other builds I haven’t tested, like the deadsnakes PPA that provides additional Python versions for Ubuntu, the Python in RedHat Enterprise Linux (which I didn’t test since they don’t have 3.9), and more.

For now, if you’re picking a version of Python to use, the Ubuntu 20.04 and especially the Conda-Forge versions appear to be noticeably faster. But in general, if performance matters to you, you’ll want to run some benchmarks.