What Units to Test?

What to cover?

In my previous post I dismissed the notion that it is necessary to test every class of your system in isolation. I argued that doing so fixes the protocols at all layers of your application and thus makes refactorings that shuffle responsibilities around more expensive. Perryn rightly pointed out that the refactoring I proposed should have been covered by some kind of regression test. So let us assume another layer of tests. It would look something like this, where the red circles mark objects that are tested together. While the lower right unit can be fully integration tested, the other units actually need stubbed or mocked out collaborators. I think it’s worthwhile to strive for components that have few external dependencies in some cases – pushing as much functionality towards the leaves as Alex recommended.

What is the problem with this approach? Well, you definitely should have an integration test at the highest possible level. In practice this can be difficult, as these things might run in different processes, on different machines, and be implemented in a variety of technologies (JavaScript on IE6, anyone?). Ideally, however, we have something along these lines:

Looking at the graph again, we might also go for the following component, as it needs only one stubbed-out dependency:
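To make this concrete, here is a minimal sketch of what a test for such a component could look like: the component is exercised as a whole through its public interface, its internal collaborator stays real, and only its single external dependency is replaced by a hand-rolled stub. All names (PriceCalculator, Rounding, ExchangeRateProvider) are hypothetical and only serve as illustration:

    // Illustrative sketch only – all names are hypothetical.
    public interface ExchangeRateProvider {
        double rateFor(String currency);
    }

    // An internal collaborator that stays real in the test.
    public class Rounding {
        public double roundToCents(double amount) {
            return Math.round(amount * 100) / 100.0;
        }
    }

    // The component under test: PriceCalculator plus Rounding form one unit;
    // only the external ExchangeRateProvider is stubbed.
    public class PriceCalculator {
        private final ExchangeRateProvider rates;
        private final Rounding rounding = new Rounding();

        public PriceCalculator(ExchangeRateProvider rates) {
            this.rates = rates;
        }

        public double priceIn(String currency, double basePriceInEur) {
            return rounding.roundToCents(basePriceInEur * rates.rateFor(currency));
        }
    }

    // Test code
    public void testShouldConvertBasePriceUsingCurrentRate() throws Exception {
        ExchangeRateProvider stubbedRates = new ExchangeRateProvider() {
            public double rateFor(String currency) {
                return 1.5; // fixed rate, no external system needed
            }
        };
        PriceCalculator calculator = new PriceCalculator(stubbedRates);

        assertEquals(15.0, calculator.priceIn("USD", 10.0), 0.001);
    }

The test neither knows nor cares how PriceCalculator and Rounding collaborate internally, so responsibilities can be shuffled between them without breaking it.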

On the other hand, if we now look at the whole picture of tests that I mentioned (adding the naïve unit tests), we end up with this picture:

That is probably overkill. So what is the point of this post? There is a terribly high number of possible tests. As a developer you have to make a call about which ones to go for. Everything else is too expensive and prevents change (Test Sclerosis!). The most confidence in the system is probably gained from system-level integration testing. It ensures the system works and it is a level that can actually drive your design in a meaningful way (breaking a system into several components being the actual design effort). However, unit tests are quicker to execute and easier to write even in a messy architecture (do you have an object representing that tool your user works with every day, or is it just a bunch of JavaScript, a bit of templating and some domainish code to pull things off a database?).

Another thing these examples reveal is that the terms unit test and integration test are relative. This means we probably have to define, implicitly or explicitly, what we refer to when talking about integration tests and unit tests.

Trivial unit tests

I get increasingly annoyed with what I call trivial unit tests. People are obviously writing tests for the sake of unit test coverage. I observed the following patterns, both of which I would call overly isolated from their collaborators:

  • Testing a factory building a composite decorator. The unit test did not test the behaviour of the composite decorator, but the fact that certain decorators had been composed (a more behaviour-focused alternative is sketched below):
    // Production code
    public class DecoratorFactory {
        public CompositeDecorator getDecoratorForRendering() {
            return new CompositeDecorator(
                new HtmlDecorator(),
                new ParagraphDecorator(),
                new ImageDecorator()
            );
        }
    }

    // Test code
    public void testShouldCreateDecoratorForRendering() throws Exception {
        DecoratorFactory factory = new DecoratorFactory();
        CompositeDecorator compositeDecorator = factory.getDecoratorForRendering();
        Decorator[] decorators = getDecoratorsFromComposite(compositeDecorator);
        int i = 0;
        assertEquals(HtmlDecorator.class, decorators[i++].getClass());
        assertEquals(ParagraphDecorator.class, decorators[i++].getClass());
        assertEquals(ImageDecorator.class, decorators[i++].getClass());
    }

  • The mock setup mirrors the actual implementation code. I found one example where
    even a for-loop was mirrored in the expectation setup:

    // Production code
    public void reprice(List<Product> productList, PricingPolicy policy) {
        for (Product product : productList) {
            productRepricingService.reprice(product, policy);
        }
    }

    // Test code
    public void testShouldRepriceAllProducts() {
        for (Product product : productList) {
            productRepricingServiceMock.reprice(product, pricingPolicy);
        }
        finishedMockSetup();
        batchRepricingService.reprice(productList, pricingPolicy);
    }

All these tests have a negative net value: nothing is gained from them, while they add a burden in terms of maintenance, build times, and clarity.
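For contrast, a test of the decorator factory that I would consider valuable checks what the composed decorators actually do to their input, not which classes were wired together. The following is only a rough sketch; it assumes a decorate(String) method on Decorator and particular decorator behaviour, neither of which appears in the snippets above:

    // Sketch only: decorate(String) and the expected output are assumptions
    // made for the sake of the example, not taken from the production code.
    public void testShouldRenderTextAsHtmlParagraph() throws Exception {
        DecoratorFactory factory = new DecoratorFactory();
        Decorator decorator = factory.getDecoratorForRendering();

        String rendered = decorator.decorate("hello");

        // Assert on observable behaviour instead of internal structure,
        // assuming HtmlDecorator wraps the output in <html> tags and
        // ParagraphDecorator wraps the text in <p> tags.
        assertTrue(rendered.startsWith("<html>"));
        assertTrue(rendered.contains("<p>hello</p>"));
    }

Such a test keeps passing when the factory is refactored to compose its decorators differently, as long as the rendered output stays the same.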

Why test in isolation?

Some reasons I could come up with for testing things in isolation:

  • The unit under test solves a very well-defined problem and can be reused in different contexts.
  • Reasoning about all kinds of boundary conditions is easier on the unit level (see the sketch after this list). (Caveat: if there is no way to drive certain cases from the application level, they might not be needed.)
  • Performance of tests as well as debugging (I see tests and debugging as complementary rather than exclusive things to do).
  • Limited availability of external systems. They might not be implemented yet, or simply not installed in the test environment.
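As an illustration of the boundary-condition point above, here is a minimal sketch; SplitCalculator and its rounding rule are hypothetical and only serve as an example of a case that is cheap to pin down at the unit level but awkward to provoke through the whole application:

    // Sketch only – SplitCalculator is a hypothetical class.
    public class SplitCalculator {
        // Splits an amount of cents evenly; the first shares absorb any remainder.
        public int[] splitCents(int totalCents, int parties) {
            int[] shares = new int[parties];
            int base = totalCents / parties;
            int remainder = totalCents % parties;
            for (int i = 0; i < parties; i++) {
                shares[i] = base + (i < remainder ? 1 : 0);
            }
            return shares;
        }
    }

    // Test code pinning down the boundary case of a non-divisible amount.
    public void testShouldGiveLeftoverCentToFirstShare() throws Exception {
        SplitCalculator calculator = new SplitCalculator();

        int[] shares = calculator.splitCents(100, 3);

        assertEquals(34, shares[0]);
        assertEquals(33, shares[1]);
        assertEquals(33, shares[2]);
    }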

In all cases the unit should be complex enough that the test case actually tells me something I can’t spot immediately by looking at the code.

Bottom line: I have come to the conclusion that in enterprise applications the complexity arises from the interplay of all components. Also, the single responsibility principle is sometimes difficult to achieve, as there are concepts that are central to the business and tend to be overstretched (especially when persisted to an RDBMS, which encourages reluctance towards fine-grained objects).
Hence I think integration test automation is more important than unit testing. However, if complex functionality arises within a subsystem, it should be nicely isolated and unit tested.



3 responses to “What Units to Test?”

  1. Johannes Link

    “Hence I think integration test automation is more important than unit testing.”
    This seems like a reasonable and intuitive conclusion at first glance. That said, I tend to disagree:
    Many people, including myself, have found that test-driving with fine-grained unit tests leads to very stable applications and that bugs on the integration level don’t turn up very frequently. This might be unintuitive, but it has been experienced by many teams.
    Given the speed and focus of fine-grained unit tests vs the complexity you often run into when going to the integration level, I now push most of my tests to the isolated unit level, after years of trying to expand the integration and system level tests.

  2. felix

    “Many people, including myself, have found that test-driving with fine-grained unit tests leads to very stable applications and that bugs on the integration level don’t turn up very frequently.”

    I am not arguing against unit tests in general (well, at least I didn’t mean to ;-)). I just think that the units you test shouldn’t necessarily be single objects with all collaborators stubbed/mocked out.
    It seems to me that in order to reap the benefits of unit testing, the units/components under test need well-defined, narrow interfaces. Sometimes this also involves running real code such as java.lang.String in your unit test.

    I think the trouble I have stems from the eager use of mocking frameworks. Instead of thinking about how to nicely isolate functionality (which is a good thing), people just inject and stub out everything in the test case, thereby missing the goal of a truly loosely coupled system.

  3. Piccolo Principe

    Unit testing is important in that it helps you write better code, because designing for testability is good design. The most famous work on this subject is Feynman’s “Personal observations on the reliability of the Shuttle”, which discusses bottom-up vs. top-down design.
