Write only the tests that you need

We write tests to help us, not to make our lives more difficult. However, unit testing and Test-Driven Development have often been advocated religiously: “you must do it 100% or you’re doing it wrong!” That is simply not true.

If you look at the unit tests you’ve written over the last year, how many of them have actually helped you? How many of them have caught a regression? How many have helped a new programmer understand the code they test? How many would still work if you refactored the code under test? I could go on. Unfortunately, in all likelihood, many of the tests are a waste of time and should be removed!

Probably the greatest misunderstanding regarding Test-Driven Development is that people focus on the “Test” part, when the words “Driven Development” are much more important. TDD is meant to help us drive the development of our system forward through tests, not to produce a foolproof set of tests. Of course, there are cases where we want to ensure that some complex code works as expected, but that is simply testing, not test-driven development. If you want to verify that some code works, by all means, write unit tests for it. If you want to use Test-Driven Development to help you develop your systems, write the tests that help you do that.

In particular, I think there are many cases where an extra test is unnecessary, or even harmful. Here are a few examples of such situations.

  • The cost if the code breaks is very low. In some situations, a bug doesn’t cause many problems. Most people don’t write unit tests for their shell scripts, for example.
  • There is something more important to do. If a customer can’t even purchase your product, it doesn’t matter how good the unit tests for canceling an order are.
  • The code is too simple to reasonably fail. Some code is so simple that it is very unlikely to ever fail, and the rare failure will most likely cost much less than writing and maintaining unit tests would (see the sketch after this list).
  • Setting up the test requires too much work. If the code depends on something complex that is hard to fake, don’t waste your time writing that low-level test. You’ll most likely fake the dependency incorrectly anyway. Leave it to a higher-level test.
  • The unit under test is a small private helper class. When the class you’re looking at is just a small private helper class for some bigger public class which performs some real business functionality, you probably don’t need to test the helper separately.
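
To make the “too simple to fail” point concrete, here is a minimal sketch in Python (the Order class and its discount rules are hypothetical, invented purely for illustration). A trivial accessor is not worth a test of its own, while logic with branching is exactly where a test earns its keep:

    import unittest

    class Order:
        """Hypothetical order class, purely for illustration."""

        def __init__(self, total):
            self.total = total

        def get_total(self):
            # Too simple to reasonably fail: a test here would
            # just restate the code.
            return self.total

        def discounted_total(self, is_member, has_coupon):
            # Complicated conditionals are where mistakes creep in,
            # so this is the logic worth testing.
            if is_member and has_coupon:
                return self.total * 0.75
            if is_member or has_coupon:
                return self.total * 0.90
            return self.total

    class DiscountTest(unittest.TestCase):
        def test_member_with_coupon_gets_largest_discount(self):
            self.assertEqual(Order(100).discounted_total(True, True), 75.0)

        def test_no_discount_without_membership_or_coupon(self):
            self.assertEqual(Order(100).discounted_total(False, False), 100)

    if __name__ == "__main__":
        unittest.main()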

In the examples above, the tests do not help us drive the development of our system forward. Instead, they slow us down. So focus your energy on writing tests that help you drive your development forward. Write tests for new features, for learning, for things you expect to break, for important edge cases, and for bugs that you fix. Beyond that, don’t write more tests.
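
A test written while fixing a bug is a good example of a test that earns its place: it reproduces the bug, proves the fix, and guards against the bug returning. Here is a minimal sketch in Python (the last_n function and its bug are hypothetical):

    import unittest

    def last_n(items, n):
        # Fixed bug: this used to return items[-n:] unconditionally,
        # but items[-0:] is the whole list, so last_n(items, 0)
        # wrongly returned everything instead of nothing.
        if n <= 0:
            return []
        return items[-n:]

    class LastNRegressionTest(unittest.TestCase):
        def test_zero_returns_empty_list(self):
            # Regression test: before the fix, this returned [1, 2, 3].
            self.assertEqual(last_n([1, 2, 3], 0), [])

        def test_takes_last_two_items(self):
            self.assertEqual(last_n([1, 2, 3], 2), [2, 3])

    if __name__ == "__main__":
        unittest.main()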

I like Kent Beck’s answer to a “how much to test” question, which sums up this whole topic rather nicely:

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don’t typically make a kind of mistake (like setting the wrong variables in a constructor), I don’t test for it. I do tend to make sense of test errors, so I’m extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.
