
Are Bad Tests Worse Than No Tests At All?


Can you write a test that does more harm than good?

In the intro to his upcoming book The Art of Unit Testing, Roy Osherove describes a case where poorly-designed tests actually hurt his team's progress:

Worse yet, some tests became unusable because the people who wrote them had left the project and no one knew how to maintain the tests, or what they were testing. The names we gave our unit test methods were not clear enough. We had tests relying on other tests. We ended up throwing away most of the tests less than 6 months into the project.
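The two smells Osherove names, unclear test names and tests relying on other tests, can be made concrete. Here's a minimal hypothetical sketch (the `Account` class and test names are invented for illustration, not from his book):

```python
import unittest

class Account:
    """Toy stand-in for the production code under test."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount

# Module-level state shared across tests: the "tests relying on
# other tests" smell from the quote.
shared_account = Account()

class BadAccountTests(unittest.TestCase):
    # Unclear name: which unit? which scenario? what is expected?
    def test_1(self):
        shared_account.deposit(50)
        # Passes only while no other test touches shared_account first.
        self.assertEqual(shared_account.balance, 50)

class GoodAccountTests(unittest.TestCase):
    # Name states the unit, the scenario, and the expected outcome;
    # no shared state, so it can run alone or in any order.
    def test_deposit_positive_amount_increases_balance(self):
        account = Account(balance=0)
        account.deposit(50)
        self.assertEqual(account.balance, 50)
```

Both suites pass today, but the first breaks as soon as another test mutates `shared_account`, which is exactly the maintenance trap the quote describes.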

I'm on the fence. Anything that exercises your code in an automatic, repeatable way has to be useful. On the other hand, poorly written tests cost time and effort to understand, refactor, and fix.

Are the drawbacks to bad tests worse than having no coverage at all?

I think the answer is that in the short term, even bad tests are useful. Trying to squeeze extra life out of them beyond that, however, pays diminishing returns.

Just like other software, your tests should be built for maintenance, but in a crunch, you can punch something in that works. It's better to have bad tests than to have untested code.

From http://codesoftly.aaronoliver.com/

Published at DZone with permission of its author, Aaron Oliver.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)



Manrico Corazzi replied on Fri, 2008/07/04 - 7:46am

What's your definition of "bad tests"?

Poorly written, overcomplex, pointless, hard to maintain, strongly coupled, too fine grained, too coarse grained, not covering the whole codebase, poorly documented?

I'd say that a misleading test (i.e., one that hides a weak spot) might be in the top five, because it creates overconfidence. Tests should make you confident about your code and your changes, not lead you to think your code is unbreakable when it isn't.

Aaron Oliver replied on Fri, 2008/07/04 - 10:53am in response to: Manrico Corazzi

Manrico, the same sentiment about misleading tests also showed up in the comments from the original post.

I do agree that it's important to watch out for misleading tests. As you said, they create blind spots, which can be embarrassing when you declare your code "covered" and then have to fix a rinky-dink bug that your tests hid.

Farquhar replied on Mon, 2008/07/07 - 3:08am

I've just been in a situation where a bloated, incomprehensible test suite had stopped all effective development. Most of the developers' time was spent fixing tests after trivial code changes. Worst of all, no one knew which tests were relevant any more. In this sort of pathological situation I couldn't help but feel that a drastic pruning job was the only approach. But then you have to sell it to management: "What? You're refactoring?? And it's only *test code*???"


Jeroen Wenting replied on Tue, 2008/07/08 - 6:21am

Bad tests can easily give a team a false sense of security. "It passes the tests, so it must be good" is an oft-heard claim, but people rarely bother to check the tests themselves for validity.

I've often asked people "who tests the tests?", and most have no answer to that question, which leads me to believe that most tests are unreliable: they aren't verified to be testing what they're supposed to test.

Aaron Digulla replied on Wed, 2008/07/09 - 2:25am

Tests are like any other code: they can go bad. In my career, I've found that it's surprisingly hard to write good tests if you have no experience doing so. People starting to write tests make them too complex and too long, give them too many dependencies, and let them take too long to run. If you're in such a situation, you have to face the fact that you've programmed yourself into a corner, and you must spend the effort to get out of there. When it hurts, something is broken, and it won't stop hurting unless you fix it. So in this sense, I say that bad tests are better than no tests, because they tell you early that you need to fix something.

As for management: I've never had a problem selling this approach to them. I usually figure that I spend 50% of my time or more writing tests and the rest on actual coding, and I'm still faster than those who code 80% of the time or more. The net result is that the code I ship to production is rock solid, or at least easy to fix when something comes up. In 99% of the cases, the things I need to fix are the ones I didn't test. This is a positive reinforcement loop that drives me to test more and more, because it stops the hurting.
