John is an experienced consultant specialising in Enterprise Java, Web Development, and Open Source technologies, currently based in Sydney, Australia. Well known in the Java community for his many published articles, and as author of Java Power Tools and Jenkins: The Definitive Guide, and founder of the open source Thucydides Automated Acceptance Test Library project, John helps organisations to optimize their Java development processes and infrastructures, and provides training and mentoring in agile development, automated testing practices, continuous integration and delivery, and open source technologies in general. John is the CEO of Wakaleo Consulting, and runs several training courses on open source Java development tools and best practices.

Code Coverage as a Refactoring Tool

05.18.2011
Code coverage, used together with TDD, can be a powerful refactoring aid. This article discusses how.

I am a big fan of using code coverage as a developer tool to promote more reliable, better tested code. The merits and limits of code coverage are widely discussed and fairly well known. Essentially, the big strength of code coverage is revealing code that has not been exercised by your unit tests. It is up to you, as a software development professional, to establish why this code has not been exercised, and whether it matters. It is also up to you to make sure that your tests are of sufficient quality, so that the code they do exercise is genuinely well tested.

In this article I don't want to discuss those aspects of code coverage. Instead I want to look at how code coverage can be a useful tool in a TDD practitioner's toolbox, in particular during the refactoring phase. Indeed, combined with TDD, code coverage becomes a powerful refactoring aid.

But code coverage should be less relevant when you use Test Driven Development, should it not? TDD automatically results in 100% coverage, right? When you use a disciplined TDD approach, there should be a failing test justifying every line of code written. Conversely, every line of code you write goes towards making a failing test pass. It should therefore be impossible to have less than 100% code coverage if you are doing proper TDD. Anything less just means you aren't doing TDD properly.

This is actually at best an over-simplification, and at worst just plain wrong. Leaving aside minor quirks in the code coverage tools, and language-related cases that arguably don't deserve coverage (for example, a third-party API that forces you to catch an exception which can never occur in your implementation, or a private constructor in a utility class designed precisely never to be called), code coverage holes can arise quite legitimately during the refactoring phase of Test-Driven Development, and code coverage metrics can be a useful aid to this refactoring process.
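To make that second category concrete, here is a contrived sketch of my own (not code from any real project) showing both kinds of hole in one small utility class:

import java.io.UnsupportedEncodingException;

public final class Texts {

    // Never called, by design: this constructor exists only to prevent
    // instantiation, so coverage tools will always report it as untested.
    private Texts() {}

    public static byte[] asUtf8Bytes(String text) {
        try {
            return text.getBytes("UTF-8");
        } catch (UnsupportedEncodingException e) {
            // The String API forces us to catch this checked exception,
            // but every JVM is required to support UTF-8, so this branch
            // can never actually execute.
            throw new IllegalStateException(e);
        }
    }
}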

The refactoring phase is a critical part of the TDD process. Refactoring involves improving (often by simplifying) your code, to make it clearer, more readable, and easier to maintain. Although refactoring should never alter functionality (and therefore application behaviour from an external viewpoint), it can and often does involve significant structural changes to your code. In these cases, code coverage can be a good indicator of areas that need tidying up.

Let's look at a few examples.

Untested classes

In the screenshot shown here, the class has 0% coverage. If you are developing using TDD, all of your classes will generally be tested at some stage, and most (if not all) will be tested directly by unit tests. A class with 0% test coverage in this context is indeed strange.

There are several possible explanations. Your class may effectively be tested by an integration or functional test. Alternatively, sometimes a class in one Maven module is effectively unit tested via a class in another module: in this case the coverage may not be picked up by your test coverage tool (Cobertura, for example, would not detect coverage across module boundaries). In both cases, this is fine if it works for you, or if you can't do otherwise, but perhaps your tests should live closer to the classes they are testing?

However, quite often, 0% coverage on a class indicates a class that is no longer used anywhere. That is the case here, so the class can safely be deleted.

Untested methods

Sometimes entire methods within a class are passed over by the test coverage metrics. In other words, these methods were never invoked during the execution of the unit tests. If you are developing using TDD, a totally untested method will be rare, and should be considered a warning sign. If it is a public method, why does it never appear in the executable specifications embodied by your unit tests?

A public method might, of course, be invoked from another module and tested during the integration or functional tests. On the other hand, it may be a method that is no longer needed after some refactoring, in which case it should simply be deleted.
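For example (a contrived sketch, not taken from any real project), the coverage report might flag a leftover like the second method here:

import java.math.BigDecimal;
import java.util.List;

public class InvoiceService {

    // Fully covered: this is the method that the tests, and the rest of
    // the application, now use.
    public BigDecimal totalOf(List<BigDecimal> lineAmounts) {
        BigDecimal total = BigDecimal.ZERO;
        for (BigDecimal amount : lineAmounts) {
            total = total.add(amount);
        }
        return total;
    }

    // 0% coverage: superseded by totalOf() during a refactoring, but never
    // deleted. If no integration or functional test exercises it either,
    // it is dead code and can safely go.
    public double legacyTotalOf(double[] lineAmounts) {
        double total = 0;
        for (double amount : lineAmounts) {
            total += amount;
        }
        return total;
    }
}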

Untested lines or conditions

Skipped lines or conditions within a method can also sometimes raise red flags, especially if the test coverage was previously higher. Incompletely tested guard conditions are a particularly common form of this. In the following screenshot, Cobertura is showing that the null check is never being exercised completely - in other words, the description parameter is never null. There are lots of reasons why this check may have been placed there (sloppy debugging is a common one), but in any case, if we are practicing TDD with any rigour, this condition is no longer necessary.

In fact, as "Uncle" Bob Martin once remarked in a tweet, a guard condition such as a null check, particularly in a non-public method, is a flag that says "I don't know how this method is being used".

For example, consider the following code:

private void doStuffTo(Client client) {
    if (client != null) {
        // do stuff
    }
}

So why are we testing for null? Should this actually be written like this?

private void doStuffTo(Client client) {
    if (client != null) {
        // do stuff
    } else {
        throw new WTFException();
    }
}

If there really is a legitimate case where a null value can turn up, cleaner guard options include assertions or preconditions:

import static com.google.common.base.Preconditions.checkNotNull;

private void doStuffTo(Client client) {
    checkNotNull(client);
    // do stuff
}
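As a side note on design, I would tend to place a precondition like this at the public boundary, so that private methods can simply trust their inputs. A minimal sketch of what I mean, assuming Guava on the classpath and a hypothetical ClientService whose public method delegates to doStuffTo():

import static com.google.common.base.Preconditions.checkNotNull;

public class ClientService {

    // checkNotNull() returns its argument, so the precondition can be
    // checked inline at the public boundary; doStuffTo() can then trust
    // that its parameter is never null.
    public void registerClient(Client client) {
        doStuffTo(checkNotNull(client, "client must not be null"));
    }

    private void doStuffTo(Client client) {
        // do stuff
    }
}

And because checkNotNull() fails with a NullPointerException, the precondition itself is easy to pin down in an executable specification, for example with JUnit 4's @Test(expected = NullPointerException.class).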


But of course it is even better to understand your code - why would a null be passed to this method in the first place? Isn't this a sign of a bug in the calling code? Where possible, I would prefer something like this myself:

private void doStuffTo(Client client) {
    // just do stuff
}


Indeed, if your tests cover all of the use cases for the public methods of a class, and the null pointer condition is still never being fully exercised, then maybe you don't need it after all. So ditch it and make your code simpler and easier to read!


And sometimes, just sometimes, these coverage gaps reveal a slip in your TDD practice - important business logic that is untested. You might be tempted to let it lie, but remember: in TDD and BDD, tests do a lot more than just test your code. They document the specifications and the design you are implementing, and go a long way towards helping the next developer understand why you did things a certain way, and what business constraint you thought you were addressing. And maybe, just maybe, the untested code contains a subtle bug that your tests will reveal. And although your cowboy developers will grumble in protest at all this extra thought and reflection, when they could just be hacking out code, this is the sort of unacknowledged extra value where processes like TDD and BDD really shine.

John is a well-known international consultant, trainer and speaker in open source and agile Java development and testing practices. He specializes in helping development teams improve their game with techniques such as Test-Driven Development (including BDD and ATDD), Automated Acceptance Tests, Continuous Integration, Build Automation, and Clean Coding practices. In the coming months, he will be running online workshops on Test-Driven Development and Automated Web Testing for European audiences on May 31-June 3, a full 3-day TDD/BDD/ATDD workshop in Sydney (June 20-22) and Wellington (date to be announced), and speaking at the No Fluff Just Stuff ÜberConf in Denver in July.

Published at DZone with permission of John Ferguson Smart, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Comments

Henry Coles replied on Sat, 2011/05/21 - 3:57am

Traditional code coverage is of course limited to reporting code that has been exercised by tests, rather than code that has actually been tested.

I recently became interested in mutation testing, which plugs this gap, and have written a tool that I've been applying to both legacy and TDD code.

It's available at PIT

I've found targeted mutation testing particularly useful prior to a refactoring to identify test gaps. It has also proved useful in identifying redundant code, such as the null guard, but at a finer-grained level.
