
I am a software engineer at Google on the Android project and the creator of the Java testing framework TestNG. When I'm not updating this weblog with various software-related posts or speaking at conferences, I am busy snowboarding, playing squash, tennis, golf, or volleyball, or scuba diving. Cedric is a DZone MVB, is not an employee of DZone, and has posted 90 posts at DZone. You can read more from him at his website.

The Pitfalls of Test-Driven Development

05.15.2014

A few days ago, David Heinemeier Hansson posted a very negative article on Test-Driven Development (TDD) which generated quite a bit of noise. This prompted Kent Beck to respond with a Facebook post, which I found fairly weak because it failed to address most of the points David made in his blog post.

I have never been convinced by TDD myself and I have expressed my opinions on the subject repeatedly in the past (here and here for example) so I can’t say I’m unhappy to see this false idol finally being questioned seriously.

I actually started voicing my opinion on the subject in my book in 2007, so I thought I’d reproduce the text from this book here for context (with a few changes).

The Pitfalls of Test-Driven Development

I basically have two objections to Test-Driven Development (TDD).

  1. It promotes microdesign over macrodesign.
  2. It’s hard to apply in practice.

Let’s go over these points one by one.

TDD Promotes Microdesign over Macrodesign

Imagine that you ask a famous builder and architect to construct a skyscraper. After a month, that person comes back to you and says:

“The first floor is done. It looks gorgeous; all the apartments are in perfect, livable condition. The bathrooms have marble floors and beautiful mirrors, the hallways are carpeted and decorated with the best art.”

“However,” the builder adds, “I just realized that the walls I built won’t be able to support a second floor, so I need to take everything down and rebuild with stronger walls. Once I’m done, I guarantee that the first two floors will look great.”

This is what some premises of Test-Driven Development encourage, especially when aggravated by the mantra “Do the simplest thing that could possibly work,” which I often hear from Extreme Programming proponents. It’s a nice thought, but one that tends to lead to very myopic designs and, worst of all, to a lot of churn, as you constantly revisit and refactor the choices you made initially so they can encompass the next milestone that you purposefully ignored because you were too busy applying another widespread principle known as “You aren’t going to need it” (YAGNI).

Focusing exclusively on Test-Driven Development tends to make programmers disregard the practice of large- or medium-scale design, just because it is no longer “the simplest thing that could possibly work.” Sometimes it does pay off to start including provisions in your code for future work and extensions, such as empty or lightweight classes, listeners, hooks, or factories, even though at the moment you are, for example, using only one implementation of a certain interface.
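As an illustration of that kind of provision, here is a minimal sketch (all names hypothetical, not from any real library) of an interface with a single implementation today, plus a factory hook that leaves room for future implementations without touching call sites:

```java
// Hypothetical sketch: a deliberate provision for future extension,
// even though only one implementation exists right now.
interface Formatter {
    String format(String message);
}

// The only implementation for the moment; an XmlFormatter or
// HtmlFormatter could be added later without changing callers.
class PlainFormatter implements Formatter {
    public String format(String message) {
        return message;
    }
}

class Formatters {
    // Factory hook: callers depend on the interface, not the class.
    static Formatter create() {
        return new PlainFormatter();
    }
}

public class Demo {
    public static void main(String[] args) {
        Formatter f = Formatters.create();
        System.out.println(f.format("hello")); // prints hello
    }
}
```

Under a strict “simplest thing that could possibly work” reading, the interface and factory are YAGNI violations; the point of the paragraph above is that this small up-front cost can be worth paying.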

Another factor to take into consideration is whether the code you are writing is for a closed application (a client or a Web application) or a library (to be used by developers or included in a framework). Obviously, developers of the latter type of software have a much higher incentive to empower their users as much as possible, or their library will probably never gain any acceptance because it doesn’t give users enough extensibility. Test-Driven Development cripples library development because its principles are at odds with the very concept of designing libraries: think of things that users are going to need.

Software is a very iterative process, and throwing away entire portions of code is not only common but encouraged. When I start working on an idea from scratch, I fully expect to throw out and completely rewrite the first if not the first two versions of my code. With that in mind, why bother writing tests for this temporary code? I much prefer writing the code without any tests while my understanding of the problem evolves and matures, and only when I reach what I consider the first decent implementation of the idea is it time to write tests.

At any rate, test-driven developers and pragmatist testers are trying to achieve the same goal: write the best tests possible. Ideally, whenever you write tests, you want to make sure that these tests will remain valid no matter how the code underneath changes. Identifying such tests is difficult, though, and the ability to do so probably comes only with experience, so consider this a warning against testing silver bullets.
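As a sketch of what such a change-resistant test looks like, consider a check pinned to observable behavior rather than to internals (names here are illustrative, not from TestNG or any book):

```java
import java.util.Arrays;

// Illustrative only: a test asserting observable behavior survives an
// internal rewrite (e.g. swapping the sorting algorithm), whereas a test
// that inspected intermediate state would break on every refactoring.
class Sorter {
    static int[] sorted(int[] input) {
        int[] copy = input.clone();
        Arrays.sort(copy); // the implementation detail is free to change
        return copy;
    }
}

public class BehaviorTest {
    public static void main(String[] args) {
        // The test only cares about the contract: output is sorted,
        // input is untouched.
        int[] result = Sorter.sorted(new int[] {3, 1, 2});
        System.out.println(Arrays.toString(result)); // prints [1, 2, 3]
    }
}
```

The test above remains valid whether `Sorter` uses `Arrays.sort`, an insertion sort, or anything else; that stability is the quality being described.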

Yes, Test-Driven Development can lead to more robust software, but it can also lead to needless churn and a tendency to over-refactor that can negatively impact your software, your design, and your deadlines.

TDD Is Hard to Apply

Test-Driven Development reading material that I have seen over the years tends to focus on very simple problems:

  • A scorecard for bowling
  • A simple container (Stack or List)
  • A Money class
  • A templating system

TDD works wonders on these examples, and the articles describing this practice usually do a good job of showing why and how.
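To show the kind of small, self-contained problem this literature favors, here is a sketch of a test-first step for the classic Money example (class and method names are illustrative, not taken from any particular book):

```java
// Hypothetical Money class, of the kind TDD tutorials drive out of a
// failing test: the "test" in main() would be written first, and this
// minimal implementation written to make it pass.
class Money {
    private final int amount;
    private final String currency;

    Money(int amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    Money plus(Money other) {
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("currency mismatch");
        }
        return new Money(amount + other.amount, currency);
    }

    int amount() { return amount; }
}

public class MoneyExample {
    public static void main(String[] args) {
        Money total = new Money(5, "USD").plus(new Money(7, "USD"));
        System.out.println(total.amount()); // prints 12
    }
}
```

On a problem this size, the red-green-refactor loop works smoothly; the article's complaint is precisely that this smoothness does not transfer to the systems described in the next paragraph.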

What these articles don’t do, though, is help programmers dealing with very complex code bases perform Test-Driven Development. In the real world, programmers deal with code bases comprising millions of lines of code. They also have to work with source code that not only was never designed to be tested in the first place but also interacts with legacy systems (often not written in Java), user interfaces, graphics, code that outputs to all kinds of hardware devices, processes running under very stringent real-time, memory, network, or performance constraints, faulty hardware, and so on.

Notice that none of the examples from the TDD reading materials falls into any of these categories, and because I have yet to see a concrete illustration of how to use Test-Driven Development to test a back-end system interacting with a 20-year-old mainframe validating credit card transactions, I certainly share the perplexity of developers who like the idea of Test-Driven Development but can’t find any reasonable way to apply it to their day jobs.

TestNG itself is a very good candidate for Test-Driven Development: It doesn’t have any graphics, it provides a rich programmatic API that makes it easy to probe in various ways, and its output is highly deterministic and very easy to query. On top of that, it’s an open source project that is not subject to any deadlines except for the whims of its developers.

Despite all these qualities, I estimate that less than 5% of the tests validating TestNG have been written in a TDD fashion, for the simple reason that code written with TDD was not necessarily of higher quality than if it had been delivered “tests last.” It was also not clear at all that code produced with TDD ended up being better designed.

No matter what TDD advocates keep saying, code produced this way is not intrinsically better than traditionally tested code. And looking back, it actually was a little harder to produce, if only because of the friction created by dealing with code that didn’t compile and tests that didn’t pass for quite a while.

Extracting the Good from Test-Driven Development

The goal of any testing practice is to produce tests. Even though I am firmly convinced that code produced with TDD is not necessarily better than code produced the traditional way, it is still much better than code produced without any tests. And this is the number one lesson I’d like everybody to keep in mind: how you create your tests is much less important than writing tests in the first place.

Another good quality of Test-Driven Development is that it forces you to think of the exit criteria that your code has to meet before you even start coding. I certainly applaud this focus on concrete results, and I encourage any professional developer to do the same. I simply argue that there are other ways to phrase these criteria than writing tests first, and sometimes even a simple text file with a list of goals is a very decent way to get started. Just make sure that, by the time you are done with an initial version, you have written tests for every single item on your list.

Don’t test first, test smart.


Update: Discussion on reddit

Published at DZone with permission of Cedric Beust, author and DZone MVB. (source)

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Comments

Dave Glasser replied on Sat, 2014/05/17 - 11:25pm

I've been developing professionally for 16 years, and in that time I've witnessed a lot of debate surrounding the various fads and methodologies that have arisen. Their proponents virtually always claim that following them will result in better software in a shorter amount of time, regardless of the type of project or other circumstances.

What I think is the most critical factor in ultimately determining the success or failure of a software project is something that's rarely talked about, especially by the methodology-of-the-month peddlers: the skill level of the developers. This is based on my own observations over the years. Excellent, motivated developers tend to produce excellent results. Average-to-mediocre developers tend to produce average-to-mediocre results. The process they follow may factor somewhat in the end results, but not much by comparison. (Now if the project management is utterly incompetent, all bets are off. That can sink any project.)

The wide distribution of skill levels among programmers has been discussed in a few of the classic books on software development, but it doesn't seem to come up much on sites like dzone or developer blogs. I think if it were talked about more, a lot of developers would be motivated to better themselves as individual programmers, rather than trying to hitch their wagons to the latest fad methodology that's come along.


Russell Bateman replied on Wed, 2014/05/21 - 8:36am

It may be somewhat gratuitous to say, but when you're using a technique or tool, don't be stupid just because the radical priests from its lunatic fringe insist upon it. And if you reject it just because it's attracted mindless, self-righteous prigs, then where's a tool or technique you can use in all good conscience?

Paul Campbell replied on Wed, 2014/05/21 - 11:31am

Testing of any form involves tradeoffs. Briefly, the pros and cons of TDD are as follows:

Pros:

  1. Encourages decoupled design
  2. Yields excellent logic path coverage
  3. Eliminates dead code
  4. Provides fast feedback
  5. Helps diagnose localized logic errors
  6. Easy to apply systematically by following simple rules
Cons:

  1. The structure of the tests closely tracks the structure of the implementation, which inhibits refactoring (since you cannot, for example, reassign class responsibilities without breaking tests, even if overall functionality remains unchanged)
  2. Encourages over/premature investment in the design of individual class APIs before they have been ratified by being fit into wider collaborations.

Most non-trivial code bases have parts where the cons will outweigh the pros, and vice versa of course.

Aaron Evans replied on Wed, 2014/05/21 - 12:32pm

deleted by author


Ad Ax replied on Wed, 2014/05/21 - 12:40pm

Nicely put. TDD reduces productivity to some extent, considering that extreme programming practices such as pair programming take two people working on one task together. As the author mentioned, it takes a lot of work in terms of re-designing, re-factoring, and coding when a big feature needs to be implemented, in terms of effort estimation.

Dirk Detering replied on Wed, 2014/05/21 - 11:05pm

@Dave Glasser: the topic of skill level never becomes more obvious than when you teach apprentices. We teach them TDD, especially Test First, simply because the code would otherwise become untestable very quickly and become subject to refactoring for that reason alone. OTOH, the low skill level leads to tests done wrong: bunches of code trying to overcome the test obstacles introduced by the bad design.

It is like with any other art: first you have to learn the hard basic practices. Then you can relax them later, when you become more experienced. I.e.: Test last only works when the code written so far is testable at all and has been written with that in mind at least.

Will Mason replied on Thu, 2014/05/22 - 12:44am

Hi, I am going to agree with point #1 and debate point #2. I think effective TDD is easier and more cost-effective than the alternative(s). Every time I see a comment like "it is hard," I need to ask myself, "harder than what?"

It is possible that TDD 'can' promote micro-design over a more holistic approach. So DON'T micro-design; friends, we are all grown-ups and expected as professionals to make good choices. If your project lends itself to micro-design, like a hobby or something experimental, I still say at some point you will want to ask the big questions, like: what is it I'm trying to achieve overall?

Some of the things I can suggest to limit micro-design tendencies are to "test blocks," rather than attributes, changes, or features. A block is how you define it, but think of an interface, API subset, or a module -- a code unit that makes sense, an area of functionality (not necessarily the same code unit) that makes sense. Test on outcomes. Most of the time your goals won't change, so by testing the results or behaviour you are looking for, there's a good chance the tests won't need to change much either. Meta-program where possible. One of my colleagues wrote a class reflection tool that inspected a class and called all the methods in certain ways for standard tests based on return type. This utility proved invaluable as a sweeper to keep things tidy. Get creative ;-)
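A rough sketch of the kind of reflection "sweeper" described in this comment might look like the following. This is purely illustrative: the original tool is not shown, so the class names, the no-argument restriction, and the not-null check are all assumptions.

```java
import java.lang.reflect.Method;

// Hypothetical subject class for the sweep.
class Sample {
    public String name() { return "sample"; }
    public int count() { return 3; }
}

public class Sweeper {
    // Hypothetical sweeper: inspect a class, invoke its no-argument
    // methods, and apply a standard check to each result (here: the
    // returned value must not be null). Returns how many methods passed.
    public static int sweep(Object target) throws Exception {
        int checked = 0;
        for (Method m : target.getClass().getDeclaredMethods()) {
            if (m.getParameterCount() == 0) {
                Object result = m.invoke(target);
                if (result == null) {
                    throw new AssertionError(m.getName() + " returned null");
                }
                checked++;
            }
        }
        return checked;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sweep(new Sample())); // prints 2
    }
}
```

A real version would presumably dispatch on return type (as the comment says) rather than apply one check, but the shape — reflection-driven, class-level, outcome-focused — is the idea being described.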

I think that TDD will only feel hard until we become more skilled. It is a skill like other skills and gets better with practice. If you want to go to the World Cup, you don't say "soccer is hard"; you say, let's play more, get coaching, find tougher opponents! It is the alternatives that let people feel it is less difficult not to test. Hey, maybe some other fool will find the bug and even fix it. Saves my time, doesn't it? So you need to look at the economics of the WHOLE project -- if TDD means your teammates aren't slowed down by other bugs like speed humps, then it is better for you to TDD. If you are not in the camp that says bugs found early are cheaper to find and fix, then nothing I can say will change that perception. Good luck with that.

I feel we should have the big-picture design. My first introduction to TDD was as an undergraduate looking at the differences between top-down and bottom-up approaches. When we developed in a top-down way, the testing was more structured and required less refactoring than a bottom-up approach. That's because you definitely need to refactor more doing bottom-up, imho, for all but simple examples.

So things can be hard, and they can be less hard. Whatever 'we' do, we should make sure code is tested so that the only bugs we let escape are design and analysis bugs, not implementation bugs. One often overlooked part of TDD is the "design": good test design is intended to reduce design bugs in your app by exercising the design as well as testing the code. That brings me back to the idea of having the big picture, having some architectural model that lets you test by component, 'unit of work,' etc. When you TDD, you should be Testing the Design.

Paul Campbell replied on Thu, 2014/05/22 - 5:41am

"A block is how you define it, but think of an interface, API sub-set or a module -- A code unit that makes sense, an area of functionality (not necessarily the same code unit) that makes sense.  Test on outcomes.  Most of the time your goals won't change so by testing the results or behaviour you are looking for, there's a good chance the tests won't need to change much too."

In my experience, people generally take TDD to mean highly granular testing, generally down to the individual class level, and it's this issue of granularity around which the pain points of TDD exist, rather than just the issue of test before/after. So I agree with the statement you make above, but in the eyes of many TDD purists, testing at the module/layer level would not be "TDD."

Dave Glasser replied on Thu, 2014/05/22 - 7:56am in response to: Dirk Detering

@Dave Glasser: the topic of skill level never becomes more obvious than when you teach apprentices. We teach them TDD, especially Test First, simply because the code would otherwise become untestable very quickly and become subject to refactoring for that reason alone. OTOH, the low skill level leads to tests done wrong: bunches of code trying to overcome the test obstacles introduced by the bad design.

So your experience has been similar to mine. Mediocre developers produce mediocre results. TDD doesn't really change that.

Lund Wolfe replied on Sat, 2014/05/24 - 6:03pm

I completely agree with Dave that quality developers build quality software.  Quality is baked in or it isn't.  Defects and quality are much more critical in the early stages of requirements/analysis and design.

That said, learning a tool or methodology, like TDD, is much easier than improving developer skill.

All things being equal, TDD at least implies that there are unit tests, and I prefer a bad design with unit tests to one without. Those tests won't help with core quality, but they can reduce the risk and keep the application from getting worse as it grows (including the customer not seeing nearly as many defects) when it otherwise would get significantly worse. Those tests can also serve as a health indicator of the project. Being testable and functional (test first for correct functionality) at the micro level is better than not at all. Design is hard. Practically, I'm not convinced that skipping unit tests or skipping TDD is going to suddenly improve the design.

TDD and low-level unit tests provide a late-stage form of quality. They may have a more positive impact on a mature project in maintenance, for defects and enhancements. I've always been suspicious of TDD for new applications. It seems like it is naturally driven by speed and a bottom-up design, with less high-level thinking, contemplation, and brainstorming. As mentioned previously, the benefits of TDD are being testable, modular, organized, flexible, and maintainable, but these are of much more value at a higher application level than at a unit-test or function-point level.

Part of the problem with TDD is the challenge of agile in general: can you design/build the application in stages, doing (or not doing) significant refactoring as needed?
