Reducing CO₂ emissions with faster software
What can you as a software developer do to fight climate change? My first and primary answer is getting involved with local politics. However, if you write software that operates at sufficient scale, you can also reduce carbon emissions by making your software faster.
In this article we’ll cover:
- Why more computation uses more electricity.
- Why you probably don’t need to think about this most of the time.
- Reducing emissions by reducing compute time.
- Reducing emissions with parallelism (even with the same amount of compute time!).
- Some alternative scenarios and caveats: embodied emissions and Jevons Paradox.
More computation, more emissions
We can start with an extremely simplified model tying computation to carbon emissions:
- Electricity in most places around the world is generated at least in part in ways that generate CO₂.
- When your CPU core isn’t running instructions, it will (in most configurations) be automatically switched to a mode where it uses less power. Conversely, when a CPU core on your computer runs CPU instructions, it uses more electricity.
- The increased electricity usage results in more CO₂ emissions. For example, if you look at a past day on the dashboard for my local energy grid (the graphs for the current day are confusing if you don’t read them carefully), system load correlates with emissions.
We can see step 2 in action on x86-64 Linux using the `perf` tool, which, among other measurements, can report on at least part of your computer’s power usage. You will need to either run `perf` as root, or run `sudo sysctl -w kernel.perf_event_paranoid=-1` to enable non-root users to run `perf`.
First, I measured the power usage of sleeping for 5 seconds, essentially doing no computation. Since this measures power usage for the whole computer, I shut down as many other programs as I could in advance:
$ perf stat -e power/energy-pkg/ -- sleep 5
Performance counter stats for 'system wide':
7.12 Joules power/energy-pkg/
5.001717453 seconds time elapsed
Next, I created a little Python program that does lots of work for 5 seconds:
import time

start = time.time()
while time.time() - start < 5:
    sum(range(1_000_000))
Then I measured power usage while this program was running:
$ perf stat -e power/energy-pkg/ -- python spin.py
Performance counter stats for 'system wide':
138.91 Joules power/energy-pkg/
5.027640326 seconds time elapsed
More computation, more electricity usage.
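As a quick sanity check, we can subtract the idle baseline from the busy measurement to estimate the extra power drawn by the computation (the 7.12 and 138.91 Joule figures are the `perf` measurements above; power in Watts is Joules per second):

```python
# Energy measured by perf over 5 seconds, in Joules (from the runs above).
idle_joules = 7.12
busy_joules = 138.91
duration_secs = 5

# Average extra power drawn by the computation, in Watts (Joules/second).
extra_watts = (busy_joules - idle_joules) / duration_secs
print(f"{extra_watts:.1f} W")  # roughly 26 W of additional power draw
```

So on this machine, running `spin.py` added roughly 26 W of package power draw on top of the idle baseline.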
Your software’s electricity usage may not matter
As context, kilowatt-hours are used for electricity billing (in the US, at least), and each kWh is equivalent to 3.6 million Joules.
So if I ran `spin.py` for an hour instead of 5 seconds, that would use (130/5 × 3600) / 3,600,000 = 0.026 kWh.
I live in Massachusetts, where in 2023 there were 0.42 kilograms of CO₂ emissions per kWh of electricity generated.
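The whole chain from Joules to emissions is simple arithmetic, using the ~130 J per 5 seconds measured above and the Massachusetts grid intensity:

```python
JOULES_PER_KWH = 3_600_000

# Approximate marginal energy use of spin.py: ~130 Joules per 5 seconds.
joules_per_hour = 130 / 5 * 3600
kwh_per_hour = joules_per_hour / JOULES_PER_KWH

# Massachusetts 2023 grid intensity: 0.42 kg CO₂ per kWh generated.
kg_co2_per_hour = kwh_per_hour * 0.42
print(f"{kwh_per_hour:.3f} kWh -> {kg_co2_per_hour:.3f} kg CO₂")
```

That works out to about 0.026 kWh and 0.01 kg of CO₂ per hour of runtime.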
Here’s how different activities compare in terms of CO₂ emissions (the numbers are all approximate; for beef I’ve seen figures ranging from 4 to 15 kg):

| Activity | kg of CO₂ (equivalent) |
|---|---|
| Eating 100 grams of beef | 6.00 |
| Driving one mile in a typical US car | 0.40 |
| Running `spin.py` for an hour | 0.01 |
This suggests that if you’re writing software that isn’t widely used, making it faster isn’t a particularly impactful activity from a climate perspective, even if it’s a generally useful improvement. So if you want to take individual action outside of political engagement, you’re better off switching from beef to chicken (or better yet, beans), or swapping your car trip for an e-bike ride.
On the other hand, consider NumPy, which had 354 million downloads from PyPI last month. Just the computation needed to download all those files adds up, so smaller package sizes might also help. And software using NumPy runs on a huge number of CPUs, often for extended periods of time. At this scale, even very small speedups in NumPy can result in a significant reduction in electricity usage.
Reducing computation reduces electricity usage
Assuming your software is running at scale, the next question is how to reduce electricity usage.
We’ll start by focusing on computing on a single CPU core. Reducing how much computation your code does will make it use less electricity. The emphasis on computation is because, as we saw above, merely sleeping or waiting uses very little electricity: an idle CPU is an efficient CPU.
We can see this in action in the various Rust Mandelbrot implementations I covered in some other articles.
We can compare the slower scalar implementation with the faster SIMD implementation, using the `RAYON_NUM_THREADS=1` environment variable to run on a single core only:
$ env RAYON_NUM_THREADS=1 perf stat -e power/energy-pkg/ -- \
./mandelbrot 10000 10000 --algo scalar > /dev/null
Performance counter stats for 'system wide':
69.81 Joules power/energy-pkg/
4.366147988 seconds time elapsed
$ env RAYON_NUM_THREADS=1 perf stat -e power/energy-pkg/ -- \
./mandelbrot 10000 10000 --algo simd > /dev/null
Performance counter stats for 'system wide':
19.83 Joules power/energy-pkg/
0.942975730 seconds time elapsed
Faster computation means reduced electricity usage.
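Putting the two runs side by side (numbers from the `perf` output above), the SIMD version was both faster and cheaper in energy terms:

```python
# Elapsed seconds and Joules from the two perf runs above.
scalar_joules, scalar_secs = 69.81, 4.37
simd_joules, simd_secs = 19.83, 0.94

print(f"speedup:          {scalar_secs / simd_secs:.1f}x")
print(f"energy reduction: {scalar_joules / simd_joules:.1f}x")
```

The SIMD implementation finished about 4.6× faster and used about 3.5× less energy.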
Parallelism doesn’t reduce computation… but it does reduce power usage!
Running our code with multiple threads on multiple cores doesn’t reduce how much computation is done.
We can see this with the `time` utility, which records both wall-clock time and CPU time:
$ env RAYON_NUM_THREADS=1 time -p \
./mandelbrot 10000 10000 --algo scalar > /dev/null
real 4.35
user 4.25
sys 0.09
$ time -p ./mandelbrot 10000 10000 --algo scalar > /dev/null
real 0.41
user 6.07
sys 0.17
Summarizing this in a table:
| Run | CPU seconds (user+sys) | Wall-clock time (secs) |
|---|---|---|
| Single core | 4.34 | 4.35 |
| Multi core | 6.24 | 0.41 |
The parallel run actually did more computation, using more CPU seconds, than the single-core run. But it finished roughly 10× faster since my CPU has many cores.
Given that it does more computation, we might expect the parallel version to use more energy. In fact, the opposite is true: parallelism reduces energy usage despite the additional computation. I ran both the scalar and SIMD implementations with parallelism, and here are the overall results:
| Parallelism? | Backend | Elapsed (secs) | Joules |
|---|---|---|---|
| No | Scalar | 4.38 | 65 |
| Yes | Scalar | 0.33 | 31 |
| No | SIMD | 0.95 | 21 |
| Yes | SIMD | 0.10 | 12 |
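One way to see why parallelism helps despite the extra work is to compute the average power draw for each run (energy divided by time): the parallel runs draw more Watts while active, but they finish so much sooner that total energy still drops. This is sometimes called racing to idle.

```python
# (parallel?, backend, elapsed seconds, Joules), from the table above.
runs = [
    ("no",  "scalar", 4.38, 65),
    ("yes", "scalar", 0.33, 31),
    ("no",  "simd",   0.95, 21),
    ("yes", "simd",   0.10, 12),
]

for parallel, backend, secs, joules in runs:
    # Average power in Watts = Joules / seconds.
    print(f"{backend:>6} parallel={parallel:<3} {joules / secs:6.1f} W, {joules} J total")
```

For example, the parallel scalar run draws roughly 94 W versus roughly 15 W for the single-core run, yet uses less than half the total energy because it finishes in a fraction of the time.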
What this suggests is that we can reduce electricity usage in two different and additive ways:
- Reducing computation by writing more efficient code.
- Utilizing multiple cores by adding parallelism.
This matches the results of the paper It’s Not Easy Being Green: On the Energy Efficiency of Programming Languages, which saw a non-linear relationship between number of cores and power usage, and concluded that “on [their] experimental platform, aggressively parallelizing programs is nearly always an energy-efficient choice.” And of course reduced computation and parallelism also give you two different and additive ways to get results faster!
Other considerations
As we move towards a world of green energy, you may know that generating the electricity you use doesn’t emit much CO₂. There is still the carbon cost of manufacturing the computers you’re using, the so-called “embodied” emissions. Again, this probably only matters at scale, but when you reach that point, the goal is to maximize how much use you can get out of a computer before it has to be replaced. If you’re operating a computer cluster, reducing computation time lets you run more jobs on the cluster, maximizing its utility.
There’s also the often-raised critique of efficiency known as Jevons Paradox, the idea that more efficiency results in increased usage. While this appears to be the case for computer hardware, I am less convinced that Jevons Paradox necessarily applies to software.
Takeaways
In many cases, the software you write just doesn’t use enough power to matter. If it does, you can reduce emissions by:
- Making it more efficient in terms of computation.
- Utilizing parallelism to run it on multiple cores.
And of course, if you care about climate change you should consider getting involved in local politics.