Jay Fields is a software developer at DRW Trading. He has a passion for discovering and maturing innovative solutions. His most recent work has been in the Domain Specific Language space, where he has delivered applications that empowered subject matter experts to write the business rules of the applications. He is also very interested in maturing software design through software testing. Jay is a DZone MVB (not an employee of DZone) and has posted 116 posts at DZone. You can read more at his website.

The Maintainability of Unit Tests

02.25.2010
At speakerconf 2010, discussion repeatedly arose around the idea that unit tests hinder your ability to refactor and add new features. It's true that tests are invaluable when refactoring the internals of a class, as long as the interface doesn't change. However, when the interface does change, updating the associated tests is often the vast majority of the effort. Additionally, if a refactoring changes the interaction between two or more classes, the vast majority of the time is spent fixing tests for several classes.

In my experience, making the interface or interaction change often takes 15-20% of the time, while changing the associated tests takes the other 80-85%. When the effort is split that drastically, people begin to ask questions.

Should I write Unit Tests? The answer at speakerconf was: Probably, but I'm interested in hearing other options.

Ayende proposed that scenario-based testing was a better solution. His examples drove home the point that he was able to make large architectural refactorings without changing any tests. Unfortunately, his tests suffered from the same problems that Integration Test advocates have been dealing with for years: Long Running Tests (20 minutes to run a suite!) and Poor Defect Localization (where did things go wrong?). However, despite these limitations, he's reporting success with this strategy.

In my opinion, Martin Fowler actually answered this question correctly in the original Refactoring book.
"The key is to test the areas that you are most worried about going wrong. That way you get the most benefit for your testing effort."
It's a bit of a shame that sentence lives in Refactoring and not in every book written for developers beginning to test their applications. After years of trying to test everything, I stumbled upon that sentence while creating Refactoring: Ruby Edition. That one sentence changed my entire attitude on Unit Testing.

I still write Unit Tests, but I only focus on testing the parts that provide the most business value.

An example
Say you find yourself working on an insurance application for a company that stores its policies by customer SSN. Your application is likely to have several validations for customer information.

The validation that ensures an SSN is 9 numeric digits is obviously very important.

The validation that the customer name is alpha-only is probably closer to the category of "nice to have". If the alpha-only name validation is broken or removed, the application will continue to function more or less normally. And the most likely problem is a typo - probably not the end of the world.

It's usually easy enough to add validations, but you don't need to test every single one. The value of each validation should determine whether a test is warranted.
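As a rough sketch of that trade-off (the class, method names, and JUnit 4 style below are my own illustration, not code from the original application), the SSN rule earns a test while the alpha-only name rule does not:

```java
import java.util.regex.Pattern;

import org.junit.Assert;
import org.junit.Test;

// Hypothetical validator: only the SSN rule is considered worth testing.
class CustomerValidator {
    private static final Pattern SSN = Pattern.compile("\\d{9}");
    private static final Pattern ALPHA_NAME = Pattern.compile("[A-Za-z]+");

    boolean isValidSsn(String ssn) {
        return ssn != null && SSN.matcher(ssn).matches();
    }

    // "Nice to have" rule: a failure here is a typo, not a lost policy.
    boolean isValidName(String name) {
        return name != null && ALPHA_NAME.matcher(name).matches();
    }
}

public class CustomerValidatorTest {
    private final CustomerValidator validator = new CustomerValidator();

    @Test
    public void ssnMustBeNineNumericDigits() {
        Assert.assertTrue(validator.isValidSsn("123456789"));
        Assert.assertFalse(validator.isValidSsn("12345678"));   // too short
        Assert.assertFalse(validator.isValidSsn("12345678X"));  // non-numeric
    }

    // No test for isValidName: the validation is cheap to add, but its
    // business value doesn't justify the maintenance cost of a test.
}
```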

How do I improve the maintainability of my tests? Make them more concise.

Once you've determined you should write a test, take the time to create a concise test that can be maintained. The longer the test, the more likely it is to be ignored or misunderstood by future readers.

There are several methods for creating more concise tests. My recent work is largely in Java, so my examples are Java-related. I've previously written about my preferred method for creating objects in Java Unit Tests. You can also use frameworks that focus on simplicity, such as Mockito. But the most important aspect of creating concise tests is taking a hard look at object modeling. Removing constructor and method arguments is often the easiest way to reduce the amount of noise within a test.
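As a rough illustration of pushing creation noise out of a test (the domain classes and helper below are hypothetical, not Jay's actual example), the second test states only what it cares about:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Minimal hypothetical domain classes, just enough to make the point runnable.
class Customer {
    final String ssn, firstName, lastName, city, state;
    Customer(String ssn, String firstName, String lastName, String city, String state) {
        this.ssn = ssn; this.firstName = firstName; this.lastName = lastName;
        this.city = city; this.state = state;
    }
}

class PremiumCalculator {
    int premiumFor(Customer customer) { return 100; } // stubbed business rule
}

public class PremiumCalculatorTest {

    // Noisy: every constructor argument is spelled out, though only the SSN matters here.
    @Test
    public void verboseVersion() {
        Customer customer = new Customer("123456789", "Jane", "Doe", "Springfield", "IL");
        assertEquals(100, new PremiumCalculator().premiumFor(customer));
    }

    // Concise: a creation helper supplies defaults, so the test shows only its intent.
    @Test
    public void conciseVersion() {
        assertEquals(100, new PremiumCalculator().premiumFor(customerWithSsn("123456789")));
    }

    private Customer customerWithSsn(String ssn) {
        return new Customer(ssn, "Jane", "Doe", "Springfield", "IL");
    }
}
```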

If you're not using Java, the advice is the same: Remove noise from your tests by improving object modeling and using frameworks that promote descriptive, concise syntax. Removing noise from tests always increases maintainability.

That's it? Yes. I find that when I only test the important aspects of an application, and I focus on removing noise from the tests that I do write, the maintainability issue is largely addressed. As a result, the pendulum swings back toward a more even split of effort between features and refactoring versus updating tests.

From http://blog.jayfields.com/

Published at DZone with permission of Jay Fields, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)


Comments

Jeroen Wenting replied on Thu, 2010/02/25 - 3:07am

I've often come across people who seem convinced that code with 100% test coverage is by definition correct, while any other code is by definition flawed. I've never subscribed to this myself, and even less so after spending several months doing little else but updating unit tests to meet code coverage requirements (the previous maintainer of the project had not bothered, and suddenly someone decided to put the module into an automated build system which rejected every build because test coverage was too low).
Even worse than wasting developer effort writing useless unit tests (testing every getter and setter on a DTO with 50 fields, for example, is utterly pointless, and we had a lot of those) is the fact that a lot of such tests are themselves flawed, leading to large numbers of tests passing where they should fail (and some failing that should pass). Often the business logic being tested doesn't match what should be tested, because the tests are typically written by the person writing the code; if that person has a flawed understanding of what the code should do, he's going to carry that flawed assumption over into his testing code, leading to code that passes a test that's technically correct but fundamentally flawed.
In other cases the test itself is buggy, causing unpredictable or reproducibly incorrect results. This has led me to ask (both in private and several times in public) the so far unanswered question of "who is testing the test?". Say I have a method that should return true under specific conditions but in a corner case incorrectly returns false. If I (or someone else) now write a test that fails to cover that corner case, the test passes when it should fail. Or, more frequently, the corner case may not even be tested because the person writing the test failed to consider its existence. In other scenarios (maybe easier to catch) the code itself is correct, but there's a flaw in the test leading it to fail in specific cases. Usually those are easy to catch, but I've experienced scenarios where the tests were sacred, therefore the flaw HAD TO BE in the code undergoing testing. The production code was therefore modified to pass the test, introducing a serious bug into the application. But it passed the test, so everyone was happy. When I looked at the test code later (when customers complained about that new bug), I discovered the logic in it was completely reversed from what was actually required, causing it to always fail if the production code produced the correct result and always pass if it produced flawed results.
So should we write unit tests? Probably, if doing so makes sense for the code to be tested. Should we rely on them blindly to determine whether the code to be tested is correct? Hell, no. And we certainly shouldn't demand an arbitrary (and certainly not 100%) level of test coverage for a codebase to be considered "production ready".
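A tiny sketch of the reversed-logic failure mode Jeroen describes (class and method names are hypothetical): the assertion is inverted, so the test fails against correct production code and would pass against broken code.

```java
import static org.junit.Assert.assertFalse;

import org.junit.Test;

// Correct production code.
class Account {
    private final int balance;
    Account(int balance) { this.balance = balance; }
    boolean canWithdraw(int amount) { return amount <= balance; }
}

public class AccountTest {
    // Flawed test: the assertion is reversed. It fails for the correct
    // implementation above and would only pass if canWithdraw were broken,
    // which is exactly the trap of "fixing" production code to satisfy a test.
    @Test
    public void allowsWithdrawalWithinBalance() {
        assertFalse(new Account(100).canWithdraw(50));
    }
}
```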

Endre Varga replied on Thu, 2010/02/25 - 8:23am

Well, again, Unit Testing is not a silver bullet. It is still a nice tool to have, but in the future there will be better ideas -- which will have their own shortcomings.

Erin Garlock replied on Thu, 2010/02/25 - 8:34am

How far do you want to take your testing? Even if all your unit tests are correct and complete, does the code being tested really do what you want? Not necessarily. You would need a mathematical proof for that kind of assurance, and producing one for anything but the most trivial software would be effectively insurmountable.
 
We must remember that testing is an insurance policy, not a guarantee.

Rogerio Liesenfeld replied on Thu, 2010/02/25 - 9:05am in response to: Jeroen Wenting

I fully agree that trying to achieve 100% code coverage for an entire codebase is not only unrealistic but usually foolish (unless it is something like missile control software). On the other hand, achieving 100% code coverage for a specific, focused, and important *part* of the codebase is both realistic (or should be) and wise. For example, it's a good idea to spend extra effort in testing the most important business methods, even to the point of achieving 100% path coverage. Extending that level of coverage to the whole codebase (or even the whole component) is obviously a whole different game.

Rogerio Liesenfeld replied on Thu, 2010/02/25 - 9:20am

On the topic of making tests more concise, I recently ran an experiment using two different mocking APIs: Mockito and JMockit (which I develop).

I wrote several pairs of equivalent tests for the same production code, one test in each pair using Mockito and the other using JMockit. For comparison, I counted the number of uses of each API in each test method (where a "use" is a call to a method or constructor exposed by the API, or the application of an annotation provided by the API). This should provide a better metric than simply counting the number of lines of code in each test method.

The JMockit API is designed to be minimal, so the results were no surprise. Full code, with each "uses count", is available here: http://code.google.com/p/jmockit/source/browse/trunk/samples/mockito/test/org/mockitousage
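Roughly what one such pair might look like (the domain classes are hypothetical, the API usage is recalled from memory rather than taken from the linked samples, and the "uses" counts in the comments follow Rogerio's definition; running both assumes each library is set up on the classpath as its documentation requires):

```java
// Hypothetical collaborator and subject under test.
interface PriceService { int priceOf(String symbol); }

class Portfolio {
    private final PriceService prices;
    Portfolio(PriceService prices) { this.prices = prices; }
    int value(String symbol, int quantity) { return prices.priceOf(symbol) * quantity; }
}

// --- PortfolioMockitoTest.java --- (3 API uses: mock, when, thenReturn)
public class PortfolioMockitoTest {
    @org.junit.Test
    public void valuesAPosition() {
        PriceService prices = org.mockito.Mockito.mock(PriceService.class);
        org.mockito.Mockito.when(prices.priceOf("DRW")).thenReturn(10);
        org.junit.Assert.assertEquals(50, new Portfolio(prices).value("DRW", 5));
    }
}

// --- PortfolioJMockitTest.java --- (3 API uses: @Mocked, Expectations, result)
public class PortfolioJMockitTest {
    @mockit.Mocked PriceService prices;

    @org.junit.Test
    public void valuesAPosition() {
        new mockit.Expectations() {{ prices.priceOf("DRW"); result = 10; }};
        org.junit.Assert.assertEquals(50, new Portfolio(prices).value("DRW", 5));
    }
}
```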

Johannes Brodwall replied on Thu, 2010/02/25 - 3:57pm

My current code has an extensive unit test suite that runs very close to the code. This gives speed and error locality, as the original poster describes. We have occasionally, but not very often, found that the tests slow our ability to refactor interfaces.

Some good experiences:

  • It's seldom much work if tests that are short, fast and expressive fail because of a refactoring. (Brittleness is okay if the test is easy to understand and quick to run)
  • It is okay to delete tests that are no longer relevant after the interface change.
  • Code that is referred to by XML or other config files is an order of magnitude more expensive to modify.
  • Creating some stable points in your application (e.g. a shared/generic DAO interface or a common Controller interface) can give you a point of testing that is isolated from a lot of refactorings, while the tests still run fast and pinpoint the failure more precisely than integration tests. (A minimal sketch follows this list.)
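A minimal sketch of such a stable point, with hypothetical names (not Johannes's actual code): a small generic DAO interface whose test keeps passing while the implementations behind it are refactored.

```java
import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

// Hypothetical "stable point": a generic DAO interface that concrete
// repositories implement. Tests written against this interface survive
// refactorings of the code behind it.
interface Dao<T> {
    void save(String id, T entity);
    T find(String id);
}

// One interchangeable implementation; swapping it out doesn't touch the test.
class InMemoryDao<T> implements Dao<T> {
    private final Map<String, T> store = new HashMap<String, T>();
    public void save(String id, T entity) { store.put(id, entity); }
    public T find(String id) { return store.get(id); }
}

public class DaoContractTest {
    @Test
    public void savedEntitiesCanBeFoundAgain() {
        Dao<String> dao = new InMemoryDao<String>();
        dao.save("policy-1", "some policy");
        assertEquals("some policy", dao.find("policy-1"));
    }
}
```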

With this in mind, we find that with even the most major API changes, less than 1/3 of our time is spent fixing brittle tests. About the same amount is spent fixing bugs in the new API design that the tests pointed out.

YMMV

Jan Kotek replied on Sat, 2010/02/27 - 7:08pm

> However, when the interface does change, updating the associated tests is often the vast majority of the effort. Additionally, if a refactoring changes the interaction between two or more classes, the vast majority of the time is spent fixing tests, for several classes.

WTF? And how do you actually VALIDATE that your cowboy refactoring was correct and did not introduce any bugs? Or do you leave that to the customer and the programmer who comes after you?

I have tons of tests, some of which run for DAYS. And it really pays off in nearly bug-free code.

Thomas Mauch replied on Sun, 2010/02/28 - 7:29pm

> In my experience, making the interface or interaction change often takes 15-20% of the time, while changing the associated tests take the other 80-85%. When the effort is split that drastically, people begin to ask questions. 

I totally agree with your statement: maintaining tests, in particular, is just too cumbersome.

I think this is also one of the reasons why we still have a lot of software without the right number of tests: while the software is still unstable, you don't want to pay the penalty of the additional overhead of maintaining tests. And when the software is finally stable, time and money are exhausted and you still have no tests.

Therefore we must find a way to reduce the overhead associated with writing and maintaining tests, so they can easily be added at any stage of software development.

I believe that this can be achieved by using a more visual approach to testing, so you have to write - and therefore also maintain - less testing code.

I have already introduced this concept, called MagicTest, in an article on Javalobby. There is also a 3-minute screencast showing MagicTest in use.

 
