Convert guard clauses to value objects

Consider this method which registers a visitor to an event.

public class Event {
    public void registerVisitor(String name, String phoneNumber) {
        if (name == null || name.trim().isEmpty()) 
             throw new IllegalArgumentException("Name was empty");
        if (!PHONE_NUMBER_PATTERN.matcher(phoneNumber).matches())
             throw new IllegalArgumentException("Number invalid");
        // Do actual registration...
    }
}

As a good member of society, it uses guard clauses to verify its preconditions, because:

  • It protects the method from sloppy callers.
  • It communicates what we expect and serves as a crude form of documentation of the preconditions.
  • It helps us discover bugs early and makes sure we do not fill our events with bogus visitors.
  • In fact, even security may depend on this, as we can use guard clauses to protect us from various forms of injection attacks.

That’s all good.

Duplication is the root of all evil

At least, everything would have been good if the above example had been the only place where we had to use these guard clauses. But of course, there are other methods through which these name and phoneNumber values pass. Each of those methods also wants to verify its preconditions and make sure it is called properly. The solution? Simple: we add guard clauses to those methods as well!

That’s where things start to go south.

Rather quickly, we end up living in a soup of guard clauses as we duplicate them over and over again.

Well, we’ve got a solution for that, don’t we? After all, we know the DRY principle and have even memorized the Extract Method shortcut. We’ll simply extract each guard clause to a separate method and reuse these methods where they are needed. (And we’ll just put them in that “util” class which we’ve got lying around anyway.) Problem solved!

public class Event {
    public void registerVisitor(String name, String phoneNumber) {
        Util.verifyNameNotEmpty(name);
        Util.verifyPhoneNumberIsValid(phoneNumber);
        // Do actual registration...
    }
}
public class Util {
    // Remember to use these wherever names or phone numbers
    // need to be validated!
    public static void verifyNameNotEmpty(String name) {
        if (name == null || name.trim().isEmpty()) 
            throw new IllegalArgumentException("Name was empty");
    }
    public static void verifyPhoneNumberIsValid(String phoneNumber) {
        if (!PHONE_NUMBER_PATTERN.matcher(phoneNumber).matches())
             throw new IllegalArgumentException("Number invalid");
    }
}

Problem solved? Well, not quite.

Sure, we got rid of some of the duplication – but not all. We still have to invoke these guard clause methods. So we still have the validation spread out everywhere. And that is assuming we remember to add them at all! Subtle security bugs may arise because we forgot to add a single guard clause in a single place.

That’s not very good.

Thankfully, we can do much better.

Understanding the problem

To get there, we’ll start by thinking about the code that we are duplicating. Why do we have to put the “name not empty” guard clause all over the place? What do the places where we add it have in common? The answer is quite simple: they all have a name parameter.

The pain we’re feeling is actually a symptom of the fact that we’ve separated logic from data. After all, what more obvious place is there for the name validation logic than together with the name itself?

Unfortunately, the name is currently just floating around anonymously in a String, so we cannot move the method to the data. (The static guard methods are a sign of that.) But we can solve that!

Value objects to the rescue

The solution is to introduce a value object Name which can hold both the data and the validation logic (and the same for PhoneNumber). It would look like this.

public class Event {
    public void registerVisitor(Name name, PhoneNumber phoneNumber) {
        // Do actual registration...
    }
}
public class Name {
    public Name(String name) {
        if (name == null || name.trim().isEmpty()) 
            throw new IllegalArgumentException("Name was empty");
        this.name = name;
    }
    //...
}
public class PhoneNumber {
    public PhoneNumber(String phoneNumber) {
        if (!PHONE_NUMBER_PATTERN.matcher(phoneNumber).matches())
             throw new IllegalArgumentException("Number invalid");
        this.phoneNumber = phoneNumber;
    }
    //...
}

Now we’re in a much better place!

  • The registerVisitor() method reads much cleaner. It is absolutely clear what the input data is, and we can go straight to understanding the actual domain logic without being distracted by guard clauses.
  • Each guard clause has a single place to call home. It no longer needs to be spread out all over the system, nor does it have to live in some “util” class. It can live together with its data, as it should.
    • This point becomes even more important if the validation logic is more complex than in this rather simple example.
  • You do not have to remember to validate the incoming data. You know that the data you get is sound. Save for the use of black magic (reflection), it is simply impossible to create a Name or PhoneNumber object in an invalid state.
  • The new value objects are excellent places to put any logic regarding these concepts that might show up in the future.
    • And for “usual suspects” such as toString(), equals(), hashCode(), and compareTo() (a sketch follows after this list).
  • You can control whether and in what ways the object can be mutated. In this case, we used an already immutable String, but in other scenarios the underlying data structure may very well be mutable.
  • We’ve made previously implicit concepts explicit. No longer is the meaning of “name” and “phone number” some fuzzy things that we all think we share common definitions of. Instead, they are explicitly defined concepts, clear and self-validating.
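
To make the last few points concrete, here is a hedged sketch of what a slightly fuller Name could look like; the equals(), hashCode(), and toString() implementations are my own additions for illustration, not part of the original example.

public final class Name {
    private final String name;

    public Name(String name) {
        if (name == null || name.trim().isEmpty())
            throw new IllegalArgumentException("Name was empty");
        this.name = name;
    }

    // The "usual suspects" now have one natural home.
    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof Name)) return false;
        return name.equals(((Name) other).name);
    }

    @Override
    public int hashCode() {
        return name.hashCode();
    }

    @Override
    public String toString() {
        return name;
    }
}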

The only possible drawback I can think of is that we create more objects, which might occupy more memory. However, unless you’ve got a profiler in front of me telling me it’s a real-world problem in that particular situation, I’ll argue it’s worth it.

What do you say?

Thanks to Daniel Deogun whose talk on writing secure code triggered the writing of this post.

See tests as living documentation

Tests are often treated as write-once-never-look-at-again code (except perhaps when a bug shows up). That is a shame, because it wastes much of the potential value tests can provide. Good tests are helpful for understanding a piece of code, and making code easier to understand is important since code is harder to read than to write.

To help make code easier to understand, I suggest that you should start looking at your tests as a living documentation of your code. View them as something that your colleagues will look at to understand the code. See them as something that you will look at to understand the code (later).

Focus on communication and intent

Why? Well, obviously, because they hopefully help you understand the code quicker. But it goes deeper than that – I believe it helps you write better implementation code. The reason is that it helps you focus on communication and intent, rather than technical details. After all, the most important audience of code is not computers, but humans. Computers understand any code that is correct, no matter how it is structured. The same cannot be said for humans. You turn computer-focused tests into human-focused documentation.

Behavior-Driven Development runs with this theme, turning tests into a collaboration tool between software developers and business experts.

Seeing tests as stories is an idea I learned from Kent Beck, one of JUnit’s creators, who said:

Writing a test really comes down to telling a story about the code. Having that mindset helps you work out many other problems during testing.

What documentation-focused tests look like

A few examples of how viewing tests as documentation can affect your test writing.

  • You start using longer, more descriptive method names.
  • You name test methods not only after the method they are testing, but after the behavior to expect (see the sketch after this list).
  • You use multiple test classes for the same implementation class, separating groups of related tests. (These groups are called “fixtures” and are how JUnit was designed and intended to be used.)
  • You start sorting test methods in a way that makes them easy to grasp.
  • You accept somewhat higher duplication in your tests because readability of tests is so important.
  • You avoid abstract super classes for test classes as they make individual test classes harder to read.
  • You write tests before implementation code to help you focus on the purpose of the code.
  • You realize that the tests needed to drive design in Test-Driven Development are not necessarily the tests needed to show the intent of the code.
  • You understand that deleting tests comes down to storytelling – which tests are needed to tell the story about this code.
  • You don’t obsess over 100% test coverage since you get less in return for each test you add.
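
As a hedged illustration of the first few points, here is what such tests might look like in JUnit 4. The class under test is just java.util.ArrayList so that the sketch stays self-contained; the naming style is the point, not the assertions.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

// The class name and the method names are meant to read together as sentences.
public class AnEmptyList {

    @Test
    public void hasSizeZero() {
        assertEquals(0, new ArrayList<String>().size());
    }

    @Test
    public void reportsThatItIsEmpty() {
        assertTrue(new ArrayList<String>().isEmpty());
    }

    @Test
    public void noLongerReportsEmptyOnceAnElementIsAdded() {
        List<String> list = new ArrayList<>();
        list.add("first element");
        assertFalse(list.isEmpty());
    }
}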

Caveat: Of course, there is also a technical or coverage aspect of unit testing, but in most cases it is of less importance. The primary audience of code is humans, not computers.

StoryTeller to the rescue

The Related Tests view that StoryTeller provides.

If you want some help with the mind shift, that is where StoryTeller comes in. It is an Eclipse plugin that uses your unit tests as living documentation.

It helps you by displaying relevant tests side-by-side with your code, and displaying test names as normal sentences rather than method names.

By always having relevant tests available, you can use the tests as a quick reference of the code. You also get reminded about the tests regularly, so you can see if they accurately describe the code. If not, it might be time to add, modify, or remove a few tests.

StoryTeller also uses normal sentences rather than method names. WhileCamelCaseIsVeryUseful, plain old text is just easier to read. Using normal sentences also helps you think of the tests as specification or documentation, not just methods.

If you use Eclipse, go ahead and give StoryTeller a spin. You just might like it!

How to write robust tests

Many unit tests are brittle. As soon as the code is changed, the test breaks and has to be changed too. We don’t want that. We want robust tests.

A robust test is a test which does not have to change when the code it is testing is changed, as long as the change preserves the intended functionality.

A robust test does not break when the code is refactored. You don’t have to remove or change a robust unit test when you fix a bug. You just add a new one to cover the bug.

If you want to start writing more robust tests, here are a few things you can consider.

  • Test on a slightly higher level. Tests on a lower level often have to be removed or rewritten completely because there is much volatility in low-level class design. They require more significant changes when a large refactoring comes around, while higher-level classes tend to get by with smaller changes.
  • Choose which classes to test. Not every class needs its own test class. In particular, consider not writing separate tests for small private helper classes which are tightly coupled to a larger public class. If a certain class is very complex, selectively target that class with tests even though you don’t give its less complex sibling classes the same treatment.
  • Don’t fake or mock too much. Tests that fake or mock too much become less robust because they know too much about how the unit performs its work. If the unit finds another way to do the same work, the test will fail.
  • Focus on the important functionality. A robust test verifies functionality rather than implementation. It is focused on the parts of the unit’s interface which are truly important while it ignores the parts of the unit’s interface (or internals!) that should be allowed to change. Put differently, it knows the difference between “intentional” and “accidental” functionality.
  • Test in the language of the domain. By expressing your tests in the language of the domain, i.e. using concepts relevant to your business or application, you naturally create tests which depend on the wanted functionality, but not on too many implementation details (a sketch follows after this list).
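
As a small sketch of the last two points (the PriceCalculator class is hypothetical, invented for this example): the test below is expressed in domain terms and checks only the outcome, so the internals of the calculation can be restructured freely without breaking it.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {

    // Hypothetical unit under test, included to keep the sketch self-contained.
    static class PriceCalculator {
        int priceWithVat(int netPrice) {
            return netPrice + vat(netPrice);
        }
        private int vat(int netPrice) {
            return netPrice / 4; // assume a 25% VAT rate
        }
    }

    // Robust: verifies the functionality, not how vat() does its work.
    @Test
    public void aNetPriceOf100GivesAGrossPriceOf125() {
        assertEquals(125, new PriceCalculator().priceWithVat(100));
    }
}

A less robust version of the same test would fake out a collaborator and verify exactly which internal calls were made; that test would fail the moment the calculator finds another way to produce the same result.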

Robust tests lead to “functionality unit” pattern

All of these guidelines together favor a certain type of design pattern. We can call it “functionality unit”. It means that any piece of (non-trivial) low-level functionality is performed by a primary class, optionally supported by a few secondary helper classes. The primary class is often the only publicly visible one and acts as a façade for the functionality performed by the secondary classes. The tests focus their efforts on the primary class and seldom test the helper classes individually, unless there is a special reason such as high algorithmic complexity. They are expressed in the language of the business functionality the primary class is supposed to perform.

Designing and testing in this way makes robust unit tests possible because it:

  • Focuses on a level low enough to unit test effectively while high enough to be reasonably stable.
  • Doesn’t require mocking since unit tests see the helper classes as internals of the primary class.
  • Focuses on functionality performed by the primary class rather than the secondary ones.
  • Creates tests which “make sense” because they are expressed in domain language.

Let us look at an example

In this example the functionality in question is to parse a certain type of document. We have a primary class Parser which is quite big. It has over 1000 lines of code and is rather hard to understand so we decide to split it up. The good part is that it is well unit tested with multiple test classes testing from different angles. To make the code clearer we figure out that extracting the two secondary classes Foo and Bar would be a good idea. It looks like this.

Depending on how you structure your tests, they may be more or less robust.

The question then becomes, what do we do with the tests?

First, we should note that the existing tests help us make the refactoring safely. They will (hopefully) break if we actually change the functionality of the Parser class. But what about after the refactoring? Should we keep the tests as they are or should we split them up into separate unit tests for each class? As always, the answer is “it depends”.

The alternative to the left represents keeping the tests more or less as they are. We save time by reusing existing tests. We test in the language of the domain. We avoid mocking because ParserTest doesn’t try to isolate Parser from Foo or Bar. To the right we have the other alternative, where we rewrite most of the tests to test each individual class. This also has benefits. We follow the very straightforward and intuitive pattern of having one test class per implementation class. Problems in the Foo or Bar classes might be even simpler to find with focused tests.
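
To make the left-hand alternative concrete, here is a hedged sketch; the tokenize/assemble split, the method names, and the Document type are invented for illustration and are not from the original example.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// The test speaks the language of the domain and exercises Foo and Bar only
// through Parser, so merging Bar back into Parser, or splitting Foo further,
// would not break it.
public class ParserTest {
    @Test
    public void aDocumentWithThreeWordsHasThreeElements() {
        assertEquals(3, new Parser().parse("one two three").elementCount());
    }
}

// Production code, all in the same package. Parser is the only public entry
// point; Foo and Bar stay package-private, so no test can couple itself to them.
public class Parser {
    private final Foo foo = new Foo();
    private final Bar bar = new Bar();

    public Document parse(String input) {
        return bar.assemble(foo.tokenize(input));
    }
}

class Foo {
    String[] tokenize(String input) {
        return input.split("\\s+");
    }
}

class Bar {
    Document assemble(String[] tokens) {
        return new Document(tokens.length);
    }
}

public class Document {
    private final int elementCount;

    public Document(int elementCount) {
        this.elementCount = elementCount;
    }

    public int elementCount() {
        return elementCount;
    }
}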

However, regarding robustness, we can ask one very important question. In which of the two alternatives would the tests survive a major implementation code refactoring? Say that we merge Bar back into Parser, or split Foo into Apple and Banana. Such a scenario would require much work on the tests in the right-hand alternative, while most likely none at all in the left-hand alternative. This is a major strength of the left-hand alternative, as well as of the “functionality unit” pattern outlined above. By sometimes viewing a small group of highly related classes as a unit, rather than an individual class, we get more robust tests.

Moving logic and data together [example]

I’ve talked about extracting logic and moving it elsewhere and I want to give an example of how doing so can improve the code. By necessity it is a rather short example, but hopefully the idea will be clear.

Let’s say we have a method called activateContract() which will activate something called a Contract for a User. Understanding exactly how it works is not necessary for this example, but the code is complex enough that we have decided it deserves its own class. The method we’re going to work with looks like this.

public void activateContract(Contract contract, User user) {
    int requiredStreamCount = 0;
    for (Service service : contract.getServices()) {
        if (service.isEnabled()) {
            requiredStreamCount += service.getRequiredStreamCount();
        }
    }
    if (requiredStreamCount > user.getAvailableStreamCount()) {
        throw new ActivationFailedException();
    }
    // Perform actual activation...
}

Let’s go through this code.

It seems that it is a method to activate a contract for a user. Before it starts with the actual activation (which I’ve ignored here), it seems to verify something which has to do with counting streams. It is not required to know what that means, but by looking at the code we can figure out that there is a contract to be activated for some user, that contracts consist of services, that services require a number of streams, and that the intent of the first few lines is to ensure that the user has enough streams to cover all the services on the contract being activated.

The problems

However, there are some problems with this code.

  1. The purpose of the code is not immediately clear. The unfamiliar reader will have to work through the code in some detail to understand its purpose. The code is not expressed in concepts relevant to activating a contract, but is focused on implementation details such as iterating and summing. In this example, the code is quite clear and it doesn’t take too long to figure out what the intent is, but it is easy to imagine a more complex example.
  2. Knowledge is spread out over the system. If any other programmer needs to perform this check, she gets no help and most likely ends up reimplementing this logic. With a bit of bad luck, she forgets to check whether a service is “enabled” and thereby creates a bug.
  3. It is more procedural than object oriented. The code does not make good use of object orientation and is very self-centered – it is all “me”, “me”, “me”. “Give me the information I need so I can do what I want!” It would be much better off asking other objects to do some of the work for it.

Extracting method to clarify intent

So how can we improve this situation? Well, to fix the first problem and help clarify the intent of the code, we can create a separate method for the validation. We use the IDE’s Extract Method refactoring to do the work for us. That would look like this.

public void activateContract(Contract contract, User user) {
    validateUserStreamCount(contract, user);
    // Perform actual activation...
}
private void validateUserStreamCount(Contract contract, User user) {
    int requiredStreamCount = 0;
    for (Service service : contract.getServices()) {
        if (service.isEnabled()) {
            requiredStreamCount += service.getRequiredStreamCount();
        }
    }
    if (requiredStreamCount > user.getAvailableStreamCount()) {
        throw new ActivationFailedException();
    }
}

That is better. Now we have moved unnecessary details about validation away from the method which is supposed to perform activation. The code is still very procedural, and although we’ve taken one step towards a reusable method, we’re not quite there yet.

Extracting method again to prepare for reuse

So I keep looking for ways to split this code up. One thing that makes this method less than ideal for reuse is that it throws an exception – another class might want to handle the situation differently. Therefore I would like to separate the logic of the method from the throwing of the exception. I extract another method.

private void validateUserStreamCount(Contract contract, User user) {
    if (!canBeActivatedFor(contract, user)) {
        throw new ActivationFailedException();
    }
}
private boolean canBeActivatedFor(Contract contract, User user) {
    int requiredStreamCount = 0;
    for (Service service : contract.getServices()) {
        if (service.isEnabled()) {
            requiredStreamCount += service.getRequiredStreamCount();
        }
    }
    return requiredStreamCount <= user.getAvailableStreamCount();
}

The new method takes care of the calculation and comparison, while the old method still makes the decision about what to do when the validation fails. Because I prefer my methods to have positive (in the boolean sense) names, I inverted the comparison and negated the result at the call site.

That is definitely one step in the right direction. This method, if we made it public, could actually be reused by other classes. The current class might not be the best place for such a method however, and we still haven’t dealt with the procedural nature of the code.

Move method to where it belongs

What we could do, and what is often done, is to make canBeActivatedFor() static and move it to some kind of utility class. This solves the second problem, since the method is now easily reusable. This is how we would do it in C or Basic. Such a utility method is not always very easy to find though, and it does not solve problem three, as it is still not taking advantage of object orientation.

Thankfully, the fix is rather simple. If we look at canBeActivatedFor() we see that it mostly deals with information related to Contract. In fact, it does not have anything to do with the class it currently lives in. Therefore, the most natural suggestion is to move it to Contract. Most IDEs have a Move method refactoring for this.

public void activateContract(Contract contract, User user) {
    validateUserStreamCount(contract, user);
    // Perform actual activation...
}
private void validateUserStreamCount(Contract contract, User user) {
    if (!contract.canBeActivatedFor(user)) {
        throw new ActivationFailedException();
    }
}
public class Contract {
    // ...
    public boolean canBeActivatedFor(User user) {
        int requiredStreamCount = 0;
        for (Service service : getServices()) {
            if (service.isEnabled()) {
                requiredStreamCount += service.getRequiredStreamCount();
            }
        }
        return requiredStreamCount <= user.getAvailableStreamCount();
    }
}

The result

Finally! The code is starting to look quite nice!

Depending on how the rest of the code looks, we might be able to deprecate or even remove the getServices() method. If we wanted, we could provide canBeActivatedFor() with just the user’s “available stream count” instead of passing in a User object. We could also further separate the counting from the comparison by extracting yet another method from canBeActivatedFor(), especially if that computation is needed elsewhere.
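
A hedged sketch of that last step, with a hypothetical requiredStreamCount() method extracted from canBeActivatedFor():

public class Contract {
    // ...
    public boolean canBeActivatedFor(User user) {
        return requiredStreamCount() <= user.getAvailableStreamCount();
    }

    // Extracted so that other code can ask for the required number of streams
    // without also performing the comparison.
    public int requiredStreamCount() {
        int requiredStreamCount = 0;
        for (Service service : getServices()) {
            if (service.isEnabled()) {
                requiredStreamCount += service.getRequiredStreamCount();
            }
        }
        return requiredStreamCount;
    }
}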

In any case, we have successfully dealt with the three problems identified with the original code and reached a number of benefits.

  • The code now reads much more clearly and the intent of the code is easy to understand.
  • The stream counting logic now lives where it conceptually belongs. It is simple to reuse if we need it elsewhere. It is easy to discover since your IDE’s auto-completion will suggest it when you use a Contract object.
  • We are closer to coding in the language of the domain with method names that describe what they do, while the implementation details are hidden inside.
  • We make use of object oriented features and place logic and data together in smart objects.
  • Depending on the rest of the code, we might be able to completely hide the Services that are stored inside Contract.

There are more things that can be improved about the code above, but I definitely think we’ve moved in the right direction with the steps we’ve taken. Feel free to add comments on any suggestions or questions on the above.

Don’t test private methods

A common question when it comes to unit testing is:

How do I test private methods?

There are in fact a number of possible ways to do this.

  • Create focused tests for public methods which are customized to exercise the private method we’re interested in, even though it is hard and creates difficult-to-understand tests.
  • Use a tool that allows you to test private methods, such as PowerMock or Java’s reflection API, even though your tests become tightly coupled to the implementation.
  • Increase the visibility to default or protected and call it from a normal unit test in the same package or through a subclass, even though you expose the class’ internal workings.

As you might imagine from the descriptions above, I believe that these strategies in most cases are wrong (even though they all work and could be useful once in a blue moon).

You get less maintainable code

Tests should generally verify behavior rather than implementation – what the result is rather than how it is achieved. A private method is by definition an implementation detail. It should be up to the implementor to rearrange the internals of the class in any way she sees fit, including having as many or as few private methods as she wants. Therefore, we should not have a test which looks into the implementation and turns the existence of a private method into a requirement.

Doing this not only violates the privacy of the object under test, it also couples the test more tightly with the implementation. This leads to more brittle tests and code which is harder to refactor. All in all, testing private methods is unnecessarily invasive and leads to less maintainable code.

It should be noted that “white box testing” (writing tests with knowledge of how the code under test works internally) does not mean that you must tightly couple your tests to the implementation. It just means that you can write clever tests which precisely target critical points in the implementation code. You can (and should) still write your tests in terms of behavior.

An opportunity to improve the design

When you feel the need to test a private method, don’t ask “How do I test private methods?” Instead, ask “Why do I need to test this private method?” In many cases, wanting to test a private method indicates a design fault – a violation of the Single Responsibility Principle. The tests are often trying to tell you that the class under test is doing enough work for two classes, and that the private method is complex enough to be worthy of a class of its own.

The need to test a private method often indicates a new class waiting to get out.

My suggestion when you feel the need to test a private method is therefore to see if you can move the private method out of the current class in a way that not only makes it testable, but also improves the design.

The simplest way to do this is often to move the private method to a new class, along with any other private methods it uses, and make it public. Then make the original code use this new class to do the work previously done by the private method. The image below illustrates such a case.

Extract a private method in need of testing into a separate class.
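
As a hedged sketch of this idea (ReportGenerator and MedianCalculator are hypothetical classes, invented for illustration): imagine that ReportGenerator originally computed the median in a private method that we felt the need to test.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// The logic that used to hide in ReportGenerator's private median() method
// now has a public home of its own and can be tested directly.
public class MedianCalculator {
    public int median(List<Integer> samples) {
        List<Integer> sorted = new ArrayList<>(samples);
        Collections.sort(sorted);
        return sorted.get(sorted.size() / 2);
    }
}

public class ReportGenerator {
    private final MedianCalculator medianCalculator = new MedianCalculator();

    public String generate(List<Integer> samples) {
        return "Median: " + medianCalculator.median(samples);
    }
}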

In some cases we don’t need to create a new class. Instead, we can sometimes make the private method into a public method on one of the classes it takes as arguments, especially if the method in question is static.

To summarize, if you have a hard time figuring out how to test some code (e.g. because it is private), it often means that the design is wrong. Fix the design issue rather than using brute force to test.

Write only the tests that you need

We write tests to help us, not to make our lives more difficult. However, unit testing and Test-Driven Development have often been advocated religiously – “you must do it 100% or you’re doing it wrong”! But that is not true.

If you look at the unit tests you’ve written over the last year, how many of them have actually helped you? How many of them have caught a regression? How many helped a new programmer understand the code they are testing? How many would still work if you refactored the code under test? I could go on. Unfortunately, in all likelihood, many of the tests are a waste of time and should be removed!

Probably the greatest misunderstanding regarding Test-Driven Development is that people focus on the “Test” part, when the latter words, “Driven Development”, are much more important. TDD is meant to help us drive the development of our system forward through tests, not to produce a foolproof set of tests. Of course, there are cases where we want to ensure that some complex code works as expected, but that is simply testing, not test-driven development. If you want to verify that some code works, by all means, write unit tests for it. If you want to use Test-Driven Development to help you develop your systems, write the tests that help you do that.

In particular, I think there are many cases where adding one more test is unnecessary, or even harmful. Here are a few examples of such situations.

  • The cost if the code breaks is very low. In some situations, a bug doesn’t cause much of a problem. Most people don’t write unit tests for their shell scripts, for example.
  • There is something more important to do. If a customer can’t even purchase your product, it doesn’t matter how good the unit tests for canceling an order are.
  • The code is too simple to reasonably fail. In many cases, the code is very unlikely to ever fail. The extremely occasional failure will most likely cost much less than writing and maintaining unit tests for it.
  • Setting up the test requires too much work. If you depend on something complex which is hard to fake, don’t waste your time writing that low level test. You’ll most likely fake the dependency incorrectly anyway. Leave it to a higher-level test.
  • The unit under test is a small private helper class. When the class you’re looking at is just a small private helper class for some bigger public class which performs some real business functionality, you probably don’t need to test the helper separately.

In the examples above, the tests do not help us drive the development of our system forward. Instead, they slow us down. So focus your energy on writing tests that help you drive your development forward. Write tests for new features, for learning, for things you expect to break, for important edge cases, and for bugs that you fix. Beyond that, don’t write more tests.

I like the answer from Kent Beck on a “how much to test” question which sums this whole topic up rather nicely.

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don’t typically make a kind of mistake (like setting the wrong variables in a constructor), I don’t test for it. I do tend to make sense of test errors, so I’m extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.

A test should be a help, not a hindrance

A test is meant to help you. If it does not help you, something is wrong.

There are many ways in which a test can be a hindrance rather than a help. It can be tightly coupled with the implementation, making the code harder to work with. It can be long, complex and very hard to read. It can execute some code without actually validating anything, giving a false sense of security.

If you find yourself with an unhelpful test on your hands, I would recommend taking one of the following steps, in this order.

  1. Understand it. Make sure that the problem lies with the test, and not your understanding.
  2. Fix it. If you do understand it, and realize that it is a badly written test – fix it!
  3. Delete it. As a last resort, if fixing the test is not worth the cost, the test should be deleted!

Positive Return On Investment

To put it another way, we can borrow some terminology from economics: a test should have a positive Return On Investment (ROI). That means the following.

The value you get out of a test should be higher than the cost to write and maintain it.

If this is not the case, the test needs to be fixed or deleted.

Keeping a bad test just because “it’s already written” is a dangerous road to take. Tests still cost money to maintain. If it is in a part of the code that isn’t changed anymore, fine, keep it. If it tests code that is under development, then do something about it.

Putting the problem before the solution

Recognize the following scenario? You get a description of a non-trivial problem. Before you’ve even heard the full description, your problem-solving brain starts working on a solution. After a few quick adjustments of your idea, you feel you’ve successfully solved the problem and start coding.

I bet you recognize it. I bet you do it all the time – I sure do. If you have experienced this, I’m sure you’ve also noticed that this first attempt at a solution tends to be incorrect, especially for more complex problems. Often you realize later that you didn’t completely understand the complexity of the situation.

This is one of the reasons why I choose to write unit tests before implementation –  to ensure that I understand the problem before I decide on a solution.

Test-Driven Development is about design, but also learning

As you perhaps have heard, Test-Driven Development (TDD) is not really about testing, it’s about design. That is true. It encourages you to think about your code from a usage perspective first, rather than an implementation perspective. You write code that uses the implementation-to-be before you write the actual implementation. The interface of the code to be written is colored by how you want to use the code, rather than how you plan to implement it.

While Test-Driven Development is about design, it is also about learning. It makes you think of the problem before the solution. I find the following rule of thumb helpful.

If you cannot describe the wanted behavior in a test method name, you are not ready to start coding.

If you follow this rule, you vastly reduce the risk that you waste your time on implementing a premature solution.
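
As a minimal, hedged illustration of the rule of thumb (the DiscountCode domain is hypothetical, invented for this sketch): the name is written before any implementation exists, and the body can start out as a deliberate failure.

import static org.junit.Assert.fail;

import org.junit.Test;

public class DiscountCodeTest {

    // The name pins down the wanted behavior; the implementation comes later.
    @Test
    public void aDiscountCodeCannotBeAppliedTwiceToTheSameOrder() {
        fail("Not implemented yet - the name describes the wanted behavior");
    }
}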

An old world map, by Rosario Fiore

A map reflects one’s understanding of the terrain.

Implementing the first solution that comes into your head would be like running as fast as you can while looking at your feet, not realizing that you may be running the wrong way. When you later realize that you’re not where you wanted to be, you will have to choose between living with being in the wrong place and spending even more time getting to the right place. It’s more effective to pause for a few minutes, and then run in the right direction from the start.

Another way to see it is that your understanding of the problem is your map of the problem. The more correct your map is, the more likely you are to end up where you want to be.

Don’t be afraid of long test names

When writing unit tests, I often write long test method names such as the following. The example is from a data access class.

aFooIsPersistedAsABar()

anExistingBazIsReusedWhenPersistingAFoo()

anExistingBarIsUpdatedWhenReceivingAnotherFooForSameQux()

anExistingBarIsRemovedWhenThereIsNoCorrespondingFoo()

To some people, method names as long as this may feel… wrong. Simply too long. I think this is taking a rule which is good in one context and treating it as an absolute law. Short method names are preferable in implementation code, because you typically make one or more calls to the method. If the method name is too long, the calling code becomes awkwardly word-wrapped and hard to read.

This is not the situation with tests, however. You typically never make a call to one of the test methods manually – the testing framework does this for you.

Some people don’t even think method names this long are allowed. In fact, they can be as long as you want. (Almost.) As the Java Language Specification puts it:

An identifier is an unlimited-length sequence of Java letters and Java digits, the first of which must be a Java letter.

Of course, for most rules there is an equally valid rule saying the opposite. An unnecessarily long name just gets hard to read for no benefit. You don’t want your test names to extend into short stories, and you most certainly want to stay within the allowed line length you are using. But if you are used to writing test method names that are 5-15 characters long, go ahead and be a bit extravagant. Explain to me what the test actually does. Tell me both the input and the output. Use as many letters as you need (but no more).
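
As a small, self-contained illustration (the scenario is deliberately trivial): nothing ever calls this method by name, so its length costs nothing, while the name alone documents input, output, and intent.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class IntegerParsingTest {

    @Test
    public void parsingTheStringMinusFortyTwoYieldsTheNegativeIntegerMinusFortyTwo() {
        assertEquals(-42, Integer.parseInt("-42"));
    }
}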

Find each bug once

Despite our best efforts, every now and then a bug slips through. Sometimes it is caught early in our own testing, sometimes it goes all the way into production.

What happens when the bug is found? Hopefully, it is fixed, the fix is verified, and a new version is released. This happens all the time. In some organizations it is a formal process with multiple gates, in some it is a very informal one. The part which I find interesting is this: if for some reason the bug were to appear again, how long would it take to find and diagnose it?

A bug should only be found by a human once

If a human is involved in catching a regression, I believe you have failed! In my mind, finding the reappearance of a known bug should be a completely automated process. That means that for every bug you fix, there should be an automated test which will fail if the bug reappears. It doesn’t matter if that test is a unit test, an integration test, a system test, or any other kind of test. The important part is that it is automated.

I don’t know how many times I’ve had the feeling of “didn’t we have this problem a few months ago?” only to later find out “yes, we did have this problem a few months ago.”

So, my suggestion is rather simple.

For each bug you fix, write an automated test to prove it!

If you do not do this, you either waste a lot of time by doing manual regression testing, or you are unprofessional enough to let your users be your testers.
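
A hedged sketch of what such a regression test could look like; the bug, the issue number, and the DateRange class are all hypothetical, invented for illustration.

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class DateRangeRegressionTest {

    // Hypothetical class that once had an off-by-one bug: the end day was not
    // considered part of the range.
    static class DateRange {
        private final int startDay;
        private final int endDay;

        DateRange(int startDay, int endDay) {
            this.startDay = startDay;
            this.endDay = endDay;
        }

        boolean contains(int day) {
            return day >= startDay && day <= endDay; // the fix: <= instead of <
        }
    }

    // Named after the (hypothetical) bug it guards against, so a future failure
    // points straight at the regression.
    @Test
    public void issue1234_theEndDayIsPartOfTheRange() {
        assertTrue(new DateRange(10, 20).contains(20));
    }
}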

A nice side effect of this practice is that over time you build a rather robust regression test suite for known, actual bugs. The tests also give you a form of documentation for all the special case handling that is often the mark of a battle-hardened system.