John is an experienced consultant specialising in Enterprise Java, web development, and open source technologies, currently based in Sydney, Australia. He is well known in the Java community for his many published articles, as the author of Java Power Tools and Jenkins: The Definitive Guide, and as founder of the open source Thucydides automated acceptance test library. John helps organisations optimise their Java development processes and infrastructure, and provides training and mentoring in agile development, automated testing practices, continuous integration and delivery, and open source technologies in general. John is the CEO of Wakaleo Consulting, and runs several training courses on open source Java development tools and best practices.

For a Fistful of Dollars: Quantifying the Benefits of TDD


According to a recent scientific study, using TDD increases development (coding) time by 15-30%, but results in 40-90% fewer defects. The study was done with four development teams, one from IBM and three from Microsoft, whose development practices are nothing if not pragmatic. This confirms what any TDD practitioner will tell you: you spend a bit more time writing tests up front, but the quality of the resulting code is far superior.

This is not the only study in this area. There are many others. Uncle Bob Martin sums up quite a few in one of his blog entries on the subject. And Misko Hevery, for example, recently spent two weeks tracking the time spent writing tests, and came to the conclusion that his team spends around 10% of their time writing tests (thanks to Alejandro Cuesta for pointing out this study).

It is worth noting that the 15-30% figure applies to the coding phase; it was afterwards that the testers found 40-90% fewer bugs. That's 40-90% fewer bugs that need fixing, and those bugs would have been found in the functional testing phase. Exact figures vary, but it is frequently observed that a bug found at that stage takes at least 10 times longer to fix than one caught during development. When you factor that in, investing 15-30% extra time to write unit tests makes a lot of sense.
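To see how the trade-off plays out, here is a back-of-the-envelope model as a small Java sketch. The concrete numbers (25% coding overhead, 65% defect reduction, 100 baseline bugs) are illustrative assumptions picked from within the ranges above, not results from the study; only the 10x late-fix cost ratio comes from the text.

```java
// Illustrative back-of-the-envelope model of the TDD trade-off.
// Assumed numbers (not from the study): baseline coding effort of 100
// units and 100 defects, a 25% TDD coding overhead, a 65% defect
// reduction, and a 10x cost for bugs fixed after functional testing.
public class TddTradeOff {
    public static void main(String[] args) {
        double coding = 100.0;          // baseline coding effort (units)
        double tddOverhead = 0.25;      // +25% coding time with TDD
        double bugs = 100.0;            // baseline defect count
        double defectReduction = 0.65;  // 65% fewer defects with TDD
        double lateFixCost = 10.0;      // a late fix costs 10x a unit

        double withoutTdd = coding + bugs * lateFixCost;
        double withTdd = coding * (1 + tddOverhead)
                + bugs * (1 - defectReduction) * lateFixCost;

        System.out.printf("without TDD: %.0f units%n", withoutTdd);
        System.out.printf("with TDD:    %.0f units%n", withTdd);
    }
}
```

Under these assumptions the non-TDD path costs 1100 units against roughly 475 with TDD: the up-front investment in tests is dwarfed by the fixing time saved.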

A concrete example: the other day, I heard about an internal study at one company using a more "traditional" (aka "waterfall") approach. The time spent in each phase was: 5 months of analysis, 2 months of coding, and 7-8 months of debugging. So four times as much time was spent debugging the code as was spent coding it in the first place! What if you could reduce that 7-8 months by 40-90% (saving between 3 and 7 months), at the cost of spending another couple of weeks writing unit tests using a TDD approach?

And this doesn't even take into account the most important benefits of TDD and BDD. Remember, TDD is not a testing technique, it's a design strategy. TDD code is almost always of much higher quality, more flexible and easier to maintain than the same code developed using more traditional methods. And TDD has an important effect on developers, in addition to encouraging clean design practices: it helps them focus on the requirements. Another internal study showed that one large waterfall-driven project implemented less than 50% of the documented requirements, and that more than half of the implemented requirements were never actually used by the users.

Other interesting lessons reported by the study (which in fact reflect industry best practices in the TDD space) were:

  • It is important to start TDD from the beginning of projects
  • Write a new unit test every time you need to fix a bug
  • Use Continuous Integration
  • Encourage fast unit tests

These principles are well-known to TDD practitioners, but it's always nice to see them confirmed by research.
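One of those lessons, writing a new unit test every time you fix a bug, can be sketched roughly as follows. The class, the bug, and all the names here are invented for illustration; this is a sketch of the habit, not code from the study. A plain check is used instead of a test framework to keep the sketch self-contained.

```java
// Hypothetical sketch of a regression test: when a bug is reported, the
// TDD habit is to first write a test that reproduces it, watch it fail,
// fix the code, and keep the test as a permanent guard.
public class RegressionTestSketch {

    // Minimal class under test, with the fix applied: the discount is
    // rounded to the nearest cent instead of truncated by int division.
    static class PriceCalculator {
        long discountInCents(long priceInCents, int percent) {
            // was: priceInCents * percent / 100, which truncated
            // 149.85 cents down to 149
            return (priceInCents * percent + 50) / 100;
        }
    }

    public static void main(String[] args) {
        // Bug report: a 15% discount on $9.99 showed as $1.49, not $1.50.
        // This check failed against the old truncating code.
        long discount = new PriceCalculator().discountInCents(999, 15);
        if (discount != 150) {
            throw new AssertionError("expected 150 cents, got " + discount);
        }
        System.out.println("regression test passed: discount = " + discount);
    }
}
```

Once the fix is in, the test stays in the suite, so the same bug can never silently reappear.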

However, TDD is not all roses. It's hard for many developers to learn, as unlearning old habits often is. Less experienced TDDers often forget the refactoring part of the test-code-refactor cycle, which means the TDD process can't do its job as a design practice. And everyone needs to be on board, including management. It's hard (not impossible, just hard) to do TDD by yourself when everyone else is doing things the traditional way.

Nor is TDD appropriate for all development situations, or a replacement for high-level architecture or design. Fine-tuning user interfaces can be tricky, for example. Uncle Bob Martin has written an excellent blog entry on this topic.

So TDD is not a silver bullet. But it is a confirmed engineering practice that works very well indeed, for a great many situations. The overhead is minimal, the gains huge. So if you're not doing it, you should give it a try. A great place to start is Lasse Koskela's excellent book Test Driven: TDD and Acceptance TDD for Java Developers. Or, in the .NET space, Test-Driven Development in Microsoft .NET, by James Newkirk and Alexei Vorontsov. There are also many excellent TDD training courses around, including the new "Testing and TDD for Java Developers" course which I will be kicking off in New Zealand and Australia over the coming months.


Published at DZone with permission of John Ferguson Smart, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)



Artur Biesiadowski replied on Mon, 2009/10/12 - 7:34am

You can really get different conclusions from the same data.

For example, take the IBM driver projects: one using TDD and one not using it.

TDD 41 KLOC in 95 MM =  432 LOC/MM

Non TDD 56.6 KLOC in 45 MM =  1257 LOC/MM

Assuming that LOC = LOC in the same type of project from the same company, it looks like the cost of TDD is to triple the coding time - most probably because you have to go through hoops to make everything testable in very small steps.

Defect density of the TDD project is 0.61 of the non-TDD project. Hard to say how much that is worth in the real world, but the improvement doesn't look impressive. We are talking here about 5-10 KLOC per developer. That could be written in 9 months by a team of 5 without TDD, and then you would have another 10 months for each developer to "get the code written by your friend, stare at it and find mistakes". I really cannot believe that you could not do better than spotting 40% of defects by proofreading 10 KLOC over a period of 10 months...

And then you get:

"Increase in time taken to code the feature because of TDD (%) [Management estimates]"

Obviously, management could not estimate it at 200% with such a defect improvement ratio...

John Ferguson Smart replied on Mon, 2009/10/12 - 8:51pm

Be careful not to confuse LOC with productivity. In my experience (and, evidently, that of others as well), non-TDD development is much less focused, and results in (a) unnecessary (and ultimately unused) code, and (b) code that is less concise (probably due to the emphasis on refactoring in TDD). If you apply TDD from the outset, you actually don't have to "go through hoops"; on the contrary, it's quite a natural and fluid process.

That said, the extra time taken by the TDD teams surprises me a little - I suspect the fact that these teams were largely new to TDD may have contributed. The IBM approach to TDD, if you read the details, seems a little on the rigid side (UML and sequence diagrams for initial design, a test per method rather than a more BDD-style approach...). The Microsoft approach seems somewhat sub-optimal as well (the testing framework was run from the command line, and the resulting log files analysed...). So these approaches to TDD are better than nothing, but far from an example of TDD at its best. From personal experience, I find that TDD, when done well, does not significantly add to development time - in fact, there are many cases where developers can easily get bogged down when not using TDD. For experienced TDDers, TDD helps maintain a smooth flow of development which is, at the end of the day, very productive.

Also, don't forget that it is very easy to write lots of lines of code quickly if you remove the constraint that it has to work - how much more time did it take to fix the extra bugs in the non-TDD code? Between 2.5 and 10 times as many bugs in the non-TDD code, that's quite a bit of fixing, especially if it's done some time after the code has been written.

Thanks for your comments!

Cloves Almeida replied on Mon, 2009/10/12 - 10:33pm

If you consider LOC a productivity measure, the most productive programmer I know is a fellow COBOL programmer.

Test code tends to be voluminous, but easy to write. Most of the time is fixture (setting up data objects, mocks, etc.).

Consider the code below, taken from JBoss Seam's documentation. Most of it is set-up. Voluminous, but easy to write.

public void testRegisterAction() {
    EntityManager em = getEntityManagerFactory().createEntityManager();

    User gavin = new User();
    gavin.setName("Gavin King");

    RegisterAction action = new RegisterAction();

    assert "success".equals( action.register() );
}





Artur Biesiadowski replied on Tue, 2009/10/13 - 8:17am

One note about the cost of fixing bugs. Generally, unit tests tend to find the easiest bugs - ones which would not take long to investigate and fix. Harder ones (deadlocks, starvations, infrastructure failures, JVM bugs, race conditions, 'interactiveness' failures, etc.) are generally not easy to test up front and will happen regardless of TDD or no TDD - I find peer review/pair programming to be the only working (or at least half-working) solution here.

I wonder if anybody has done any research (and if it is possible to measure it at all): if TDD is finding 80% of non-TDD bugs early, isn't it a Pareto-like relation (finding the 80% of bugs which would require only 20% of the time to fix even if found later)?

@Cloves - I'm in no way saying that LOC is a measurement of productivity globally. But in this particular case, the study was done quite properly - similar domain, same language, same company, similar team sizes. Unless TDD changes LOC per functionality considerably, I think it is fair to compare LOC in this particular context.


Shaw Gar replied on Tue, 2009/10/13 - 11:27am

I think TDD requires fairly good programming experience on the part of the developer. It's not easy for beginner programmers to write tests: as they are learning to code, it's harder still to learn to write tests for the code they are learning to write... Also, project deadlines dictate how much you can follow TDD, as does learning/using new frameworks, which is so common in the Java world. TDD is more often discussed in blogs/articles and less often used in real-world projects - this has been my experience, so don't shoot me ;-)

Mrmagoo Magoo replied on Tue, 2009/10/13 - 11:54pm in response to: Cloves Almeida

If you consider LOC a productivity measure, the most productive programmer I know is a fellow COBOL programmer.

When people measure LOC for comparison (e.g. VC funding) they take into account and standardize between language generations and types. Of course whether LOC is a valid measure even within a language across teams is highly debatable.

John Ferguson Smart replied on Wed, 2009/10/14 - 1:01pm

I think the LOC issue is missing the bigger picture. It is fairly well observed that one of the issues with traditional development practices is that you end up writing features (and code) that are not used and/or not requested by the end user, and at the same time miss important features or scenarios that the users really do need (whether they mentioned them or not - important corner cases, for example, that show up as bugs afterwards). The refactoring in TDD shouldn't be underestimated, either. In TDD, you may well write less code, but you are much more likely to write the correct code, and in a way that is much better designed and easier to maintain. Again, maintenance costs are not visible here, but are very, very real, and significantly higher (by orders of magnitude, I suspect), for non-TDD code.

Artur Biesiadowski replied on Thu, 2009/10/15 - 2:52am in response to: John Ferguson Smart


I don't think that TDD makes you write less 'waste' code. Other agile practices, which often go together with TDD, do, but TDD itself is not much better, given the same quality of requirements analysis.

As far as maintenance costs are concerned, it is a double-edged sword. You save something on the maintenance cost of the program, but you pay something extra on the maintenance cost of the tests. Indeed, the savings are probably a lot higher than the costs, but I think it is important to remember that there is an ongoing cost to TDD.

As far as refactoring and clean code is concerned, I have to agree fully - TDD is a major improvement here.

And while we are on the topic, I will ask again (so far, nobody has ever answered me in previous discussions). Can you point me to an open source TDD project which is at least, let's say, 50-100 KLOC, is non-trivial (something smarter than 1000 beans with DAOs and HTMLs to edit them in a database, preferably heavily multithreaded/parallel and/or GUI/desktop-based) and can be shown as an example of PROPER TDD?

John Ferguson Smart replied on Thu, 2009/10/15 - 12:54pm in response to: Artur Biesiadowski

You are correct, Artur, in saying that TDD alone is not a complete answer to the 'waste code' issue, although TDD/BDD does help a great deal with waste code on a technical level (less gold plating). I agree that for fundamental matching of specified requirements to real user needs, you need other agile practices as well. Regarding open source projects that use TDD/BDD in practice, have you looked at FitNesse or easyb?

Artur Biesiadowski replied on Fri, 2009/10/16 - 8:05am


Easyb seems to be a very small project (unless I'm missing 40+ KLOC somewhere). I'll take a closer look at FitNesse - from a cursory overview, it seems to have passed the size threshold where things might be interesting.

Thanks for the hint.
