The high cost of slow tests

Your test suite runs too slowly, and it’s annoying—but it’s definitely not an emergency. After all, you’ve got a whole slew of bug fixes and feature requests queued up, and those are more important than your test suite. So you can never quite justify spending the time to speed it up.

But while it’s true that your test suite is a means, not an end, a slow test suite can be very expensive, and well worth your time to speed up. To see why, in this article I will go over some of the costs that slow tests impose, some obvious, some less so:

  • Wasted developer time.
  • Context switching.
  • Reduced velocity.
  • Bypassed tests.

The obvious cost: wasted time

Let’s say it takes 6 minutes to run your team’s test suite. 6 minutes isn’t that long, right?

If each developer on the team ends up waiting for the test suite twice a day, and the team has 10 developers, that’s 10×6×2=120 developer minutes/day spent waiting. Put another way, that’s 25% of a single developer’s 8-hour workday.

Given developer salaries in the US, and the fact that a feature that takes a month to develop can be sold and resold to tens or hundreds or thousands of customers, we can reasonably assume that a developer can produce $400K of value in software per year.

Combine those two numbers and we get a sense of the cost of time spent waiting twice a day for a 6-minute test suite: hundreds of thousands of dollars a year.

  • $100,000/year for 10 developers.
  • $500,000/year for 50 developers.
  • $1,000,000/year for 100 developers.

These aren’t accurate numbers, of course, but they’re in the right range: slow tests add up.
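The arithmetic above can be sketched as a short back-of-the-envelope calculation. The constants are the article's assumptions (a 6-minute suite, two runs per developer per day, an 8-hour workday, and roughly $400K of value per developer per year), not measurements:

```python
# Back-of-the-envelope cost of waiting on a slow test suite.
# All constants are the article's assumptions, not measured values.
SUITE_MINUTES = 6
RUNS_PER_DEV_PER_DAY = 2
WORKDAY_MINUTES = 8 * 60
VALUE_PER_DEV_PER_YEAR = 400_000

def annual_waiting_cost(num_developers: int) -> float:
    """Rough dollars/year of developer time spent waiting on the suite."""
    # Total minutes the whole team spends waiting each day:
    waiting_minutes_per_day = num_developers * SUITE_MINUTES * RUNS_PER_DEV_PER_DAY
    # Express that as a fraction of one developer's workday, then
    # scale by the value a developer produces in a year:
    fraction_of_a_developer = waiting_minutes_per_day / WORKDAY_MINUTES
    return fraction_of_a_developer * VALUE_PER_DEV_PER_YEAR

for team_size in (10, 50, 100):
    print(f"{team_size} developers: ${annual_waiting_cost(team_size):,.0f}/year")
# 10 developers: $100,000/year
# 50 developers: $500,000/year
# 100 developers: $1,000,000/year
```

Change the constants to match your own team's suite duration and run frequency; the conclusion usually survives even pessimistic assumptions.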

When waiting gets boring: task switching

Beyond a certain point, of course, you aren’t going to sit there staring at your screen waiting for the tests to finish. Instead, you’re going to switch to another task.

Then, when the tests finish (or when you remember to check) you’ll switch back, see if something failed, fix it, rerun the tests, and then switch back to the other task.

All this task switching has a cost: remembering what you were doing, why you were doing it, figuring out where the relevant browser tabs are hiding, and so on. At the cognitive level, there’s extensive psychological research (summarized by the American Psychological Association) showing that task switching slows down performance.

Given the tooling and cognitive complexity of programming, the costs of task switching are quite high, though difficult to measure.

Really slow tests: reduced velocity, bypassed testing

Once your test suite is sufficiently slow, additional costs get added on. Consider for example a test suite that takes 4 hours to run.

First, it becomes quite difficult to merge a feature or bug fix within a single day. The smallest mistake that causes the test suite to fail can push finishing up the task to the next business day—and if you’re not careful, the day after that.

Second, this also makes the task switching problem worse. You need to remember what you were up to yesterday, which is a lot harder than remembering what you were up to half an hour ago.

Third, the reduced velocity can clash with the need to ship things like emergency bug fixes. The slower the test suite, the more tempting it is to bypass it—"just this once"—in order to deploy fixes to customers faster.

And that leads to more bugs, and more emergency deploys.

Slow tests are expensive!

Because your test suite is a critical bottleneck in the development process, impacting your whole team multiple times a day, small delays can quickly add up to big costs.

Don’t write off slow tests as a mere annoyance: do the math on how much time your team is wasting, and then spend a commensurate amount of time speeding up your test suite. A week’s worth of developer time this month can save you a whole lot more over the course of a year.

You might also enjoy:

» When C extensions crash: easier debugging for your test suite
» Fast tests for slow services: why you should use verified fakes