All Pythons are slow, but some are faster than others
Python is not the fastest language around, so any performance boost helps, especially if you’re running at scale. It turns out that depending where you install Python from, its performance can vary quite a bit: choosing the wrong version of Python can cut your speed by 10-20%.
Let’s look at some numbers.
Comparing Python builds (February 2021)
I compared four builds of Python 3.9:
- python:3.9-buster, the “official” Python Docker image.
- Ubuntu 20.04’s packaged Python.
- Anaconda Python.
- Conda-Forge Python.
If you’re not familiar with Conda, it’s a packaging system whose repositories include precompiled versions of pretty much all the libraries and executables you might need (including Python itself), with the notable exception of the standard C library. Anaconda, the company that created Conda, provides a package channel, and a community project called Conda-Forge provides Python and thousands of other packages.
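If you’re not sure which build a given interpreter came from, the interpreter can tell you. As a small stdlib-only sketch (the exact banner text varies by distributor):

```python
import platform
import sys

# Where the interpreter binary lives, e.g. /opt/conda/bin/python
# for a Conda install or /usr/local/bin/python in the Docker image.
print(sys.executable)

# The version banner usually names the distributor and compiler,
# e.g. "packaged by conda-forge" or "[GCC 9.3.0]".
print(sys.version)

# Implementation name; CPython for all the builds discussed here.
print(platform.python_implementation())
```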
All of the benchmark runs were inside a Docker container, on Fedora 33, with an Intel Xeon CPU E3-1226 v3 @ 3.30GHz. Docker seems to impose a surprisingly high performance overhead on Fedora, but that overhead ought to be consistent across runs of the same benchmarks, and I got similar results in previous benchmarking runs using Podman (which has lower overhead but mysteriously broke over the weekend).
Here are the results, showing mean and standard deviation; each benchmark was run 10 times, and I also did multiple overall runs with similar results. Lower is better, since we’re measuring elapsed time. The Conda-Forge Python is fastest, followed by Ubuntu 20.04; Docker’s official Python image is the slowest.
|Python build|Benchmark 1|Benchmark 2|Benchmark 3|
|---|---|---|---|
|Conda-Forge|491ms ± 3ms|78ms ± 0.8ms|514µs ± 5µs|
|Ubuntu 20.04|512ms ± 3ms|80ms ± 0.7ms|537µs ± 7µs|
|Anaconda|523ms ± 5ms|86ms ± 2.3ms|550µs ± 3µs|
|Official Docker image|543ms ± 3ms|92ms ± 0.6ms|590µs ± 12µs|
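The table above doesn’t name the individual benchmarks, but you can reproduce this kind of comparison yourself by timing the same workload under each interpreter with the standard-library timeit module. A minimal sketch, using an arbitrary CPU-bound workload as a stand-in for a real benchmark suite:

```python
import timeit

# An arbitrary CPU-bound workload standing in for a real benchmark.
def workload():
    return sum(i * i for i in range(100_000))

# Repeating and taking the minimum is less noisy than a single run.
runs = timeit.repeat(workload, number=10, repeat=5)
print(f"best of 5: {min(runs) * 1000:.1f}ms")
```

Run the same script under each Python build you want to compare, keeping the hardware and container setup identical.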
Why the differences?
It turns out that compiling Python for maximum performance is actually quite tricky: it involves profile-guided optimization (PGO), where runs of real code are used to guide the compiler, plus a variety of other knobs you can tweak.
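You can check whether a given CPython binary was configured with PGO by inspecting the configure flags it recorded at build time. This is a heuristic, and CONFIG_ARGS may be unavailable on some platforms (e.g. Windows):

```python
import sysconfig

def built_with_pgo() -> bool:
    """Heuristic: was this CPython configured with --enable-optimizations?"""
    # CONFIG_ARGS holds the ./configure command line used to build
    # this interpreter; it may be None on some platforms.
    args = sysconfig.get_config_var("CONFIG_ARGS") or ""
    return "--enable-optimizations" in args

print(built_with_pgo())
```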
One knob is whether the core of the Python implementation lives in a shared library or in the python executable itself. The shared-library version tends to be rather slower. Fedora and RHEL also use the shared-library build of Python; however, they (and, in the future, Python 3.10 by default) use the -fno-semantic-interposition compiler option to speed things up. The official python Docker image doesn’t use this option yet, which explains at least part of why it is so much slower.
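Whether your interpreter uses the shared-library layout is also recorded in its build configuration, so you can check it from a running Python:

```python
import sysconfig

# Py_ENABLE_SHARED is 1 when libpython is built as a shared library,
# 0 when the interpreter core is linked into the python executable.
shared = sysconfig.get_config_var("Py_ENABLE_SHARED")
print("shared libpython" if shared else "statically linked core")
```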
That isn’t enough to explain all these differences in performance, however, nor the differences between the other builds. Other factors might include whether and how they do profile-guided optimization, other compiler flags, compiler versions, glibc differences, and more.
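Two of those factors, the compiler and the C library, can be inspected from a running interpreter:

```python
import platform

# Compiler used to build this interpreter, e.g. "GCC 9.3.0".
print(platform.python_compiler())

# C library the interpreter is linked against, e.g. ("glibc", "2.31")
# on Linux; returns ("", "") where it can't be detected.
print(platform.libc_ver())
```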
My hope is that the various organizations doing Python builds continue to learn from each other, and perhaps even set up a cross-build performance comparison. The baseline performance of Python could be a lot better than it is today in almost all installed environments.
Another pass: Python 3.10 (May 2022)
Python 3.10 has different optimization settings, and there are now newer releases of Debian (Bullseye has superseded Buster) and Ubuntu (22.04) with newer compilers. So I decided to rerun these tests with newer images. These results may not be comparable in absolute terms to the 3.9 numbers; it’s more useful to compare them to each other. Again, lower is better:
|Python build|Benchmark 1|Benchmark 2|Benchmark 3|
|---|---|---|---|
|Conda-Forge|479ms ± 35ms|55.6ms ± 1.0ms|367µs ± 5µs|
|Ubuntu 22.04|486ms ± 29ms|55.6ms ± 1.4ms|379µs ± 5µs|
|Official Docker image|471ms ± 32ms|55.4ms ± 1.7ms|358µs ± 4µs|
With Python 3.10, I am not seeing any meaningful difference between the different Python builds.
Takeaways: be careful which Python you choose
I was surprised at how much performance variation there was between different builds of Python 3.9.
Perhaps this was a benchmarking failure on my part, but the builds really are configured differently, so some variation is plausible. And there are many other builds I haven’t tested, like the deadsnakes PPA that provides additional Python versions for Ubuntu, the Python in Red Hat Enterprise Linux, and more.
For now, if you’re picking a version of Python to use and you’re on something older than 3.10, the Ubuntu 20.04 build and especially the Conda-Forge build appear to be noticeably faster. For Python 3.10, the variants I tested all seem equivalent. More broadly, if performance matters to you, you’ll want to run benchmarks not just on your own code, but also on the lower layers you rely on, like Python itself.