
Henrik Warne is a software developer in Stockholm, Sweden. He has been programming professionally for more than 20 years. He is a DZone MVB.

5 Unit Testing Mistakes

02.20.2014

When I first heard about unit testing using a framework like JUnit, I thought it was such a simple and powerful concept. Instead of ad hoc testing, you save your tests, and they can be run as often as you like. In my mind, the concept didn’t leave much room for misunderstanding. However, over the years I have seen several ways of using unit tests that I think are more or less wrong. Here are 5, in order of importance:

1. Testing algorithms together with coordinators. Algorithmic logic is easiest to test if it is separated from coordination code (see Selective Unit Testing – Costs and Benefits). Otherwise you end up with tests where, for example, you first have to submit a job through a job queue before the logic is tested. The job queue part only complicates things. Unless you are testing the job queue itself, break out the logic that would be executed when the run method is called, and test that logic separately. Both the code and the tests become much easier to write and manage that way.
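A minimal sketch of what that separation might look like (hypothetical names, JUnit 4 assumed): the calculation the job would perform in run() lives in its own class, so no job queue is involved in the test.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical example: the calculation the job performs in its run()
// method is extracted into its own class.
class DiscountCalculator {
    // Pure algorithmic logic: 10% off totals of 100 or more.
    int discountedTotal(int total) {
        return total >= 100 ? total - total / 10 : total;
    }
}

// The coordinator stays thin; it is not needed in the unit test at all.
class DiscountJob implements Runnable {
    private final DiscountCalculator calculator = new DiscountCalculator();
    private final int total;

    DiscountJob(int total) {
        this.total = total;
    }

    public void run() {
        // ... store or report the result somewhere ...
        calculator.discountedTotal(total);
    }
}

public class DiscountCalculatorTest {

    @Test
    public void largeTotalsGetTenPercentOff() {
        assertEquals(90, new DiscountCalculator().discountedTotal(100));
    }

    @Test
    public void smallTotalsAreUnchanged() {
        assertEquals(99, new DiscountCalculator().discountedTotal(99));
    }
}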

2. Mocking too much. Perhaps the greatest benefit of unit tests is that they force you to write code that can be tested in isolation. In other words, your code becomes modular. When you mock the whole world around your objects, there is nothing that forces you to separate the parts. You end up with code where you can't create anything in isolation – it is all tangled together. From a recent tweet by Bill Wake: "It's ironic – the more powerful the mocking framework, the less pressure you feel to improve your design."
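By contrast, here is a small sketch of mocking kept to a single boundary (hypothetical names, Mockito and JUnit 4 assumed): only the collaborator at the edge is mocked, and the logic in Checkout stays a plain object that can be constructed and tested in isolation.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// Hypothetical collaborator at the boundary of the code under test.
interface PriceService {
    int priceOf(String item);
}

// The logic under test is a plain object with one injected dependency.
class Checkout {
    private final PriceService prices;

    Checkout(PriceService prices) {
        this.prices = prices;
    }

    int total(String... items) {
        int sum = 0;
        for (String item : items) {
            sum += prices.priceOf(item);
        }
        return sum;
    }
}

public class CheckoutTest {

    @Test
    public void totalsThePricesOfAllItems() {
        // Mock only the boundary (the price lookup), not the whole world.
        PriceService prices = mock(PriceService.class);
        when(prices.priceOf("apple")).thenReturn(10);
        when(prices.priceOf("pear")).thenReturn(15);

        assertEquals(25, new Checkout(prices).total("apple", "pear"));
    }
}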

3. Not using asserts. Sometimes I see tests where an object is created, some methods are called, and that’s it. Maybe it is done in a loop, with some variation in creation or calling. However, nothing is ever checked using asserts. That misses the whole point – checking that the code behaves as expected. Sure, the code is run, but that’s it. If an exception is thrown, we would notice, but nothing else is verified.
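A short illustration of the difference (JUnit 4 assumed): the first test merely exercises the code, the second actually checks it.

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class AssertExampleTest {

    @Test
    public void exercisesTheCodeButVerifiesNothing() {
        List<String> names = new ArrayList<String>();
        names.add("Alice");
        names.add("Bob");
        // No asserts: this only proves that no exception was thrown.
    }

    @Test
    public void actuallyChecksTheBehaviour() {
        List<String> names = new ArrayList<String>();
        names.add("Alice");
        names.add("Bob");
        assertEquals(2, names.size());
        assertEquals("Alice", names.get(0));
    }
}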

4. Leaving print statements in the tests. I see this as a remnant from manual testing – you look at the values and decide if they are correct or not. But all checking should be done using asserts. If an assert fails, you will see it, because the test fails. When the test passes, nothing should be printed. Sometimes print statements can be useful while developing the tests, but in that case add a flag and turn printing off when checking in the tests.
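A sketch of that approach (JUnit 4 assumed; the DEBUG flag is a hypothetical convention): the check lives in the assert, and printing stays off when the test is checked in.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PrintFreeTest {

    // Hypothetical flag: turn on only while developing the test locally,
    // and leave it off when checking the test in.
    private static final boolean DEBUG = false;

    @Test
    public void reportsTheParsedValueThroughAssertsNotPrintouts() {
        int parsed = Integer.parseInt("42");
        if (DEBUG) {
            System.out.println("parsed = " + parsed);
        }
        // The real check is the assert; a passing run prints nothing.
        assertEquals(42, parsed);
    }
}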

5. Checking the log statements, not the result. Thankfully not common, but I have seen an otherwise very competent developer do this. Since it is the result of the method that matters, not what is printed in the log, there can be errors in the code while the tests still pass. Enough said.

The last 3 problems are all easy to avoid. The first 2 require more effort, but will result in code that is nicely separated. Happy unit testing!

Published at DZone with permission of Henrik Warne, author and DZone MVB. (source)

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Comments

David Karr replied on Fri, 2014/02/21 - 10:10am

Related to mocking frameworks, another reason why "mocking too much" can be a problem is when you have to "graduate" from something like Mockito to PowerMock in order to mock an unusual construction.  The latter is more complicated and a little harder to figure out. Unfortunately, developers sometimes have no choice, as some legacy frameworks used in applications weren't written with convenient unit testing in mind, requiring strong mocking fu.

Also, in keeping with the theme of this list, another common mistake is trying to assert (or "verify" with mocking frameworks) too much. You should assert or verify the critical business logic, not irrelevant side effects.
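One way to read that advice, as a sketch with hypothetical names (Mockito and JUnit 4 assumed): verify the business-critical interaction and assert the outcome, but leave incidental details unverified.

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

// Hypothetical collaborators: the gateway call is the critical effect,
// the audit log is an incidental one.
interface CardGateway {
    void charge(int amountInCents);
}

interface AuditLog {
    void record(String message);
}

class PaymentService {
    private final CardGateway gateway;
    private final AuditLog audit;

    PaymentService(CardGateway gateway, AuditLog audit) {
        this.gateway = gateway;
        this.audit = audit;
    }

    boolean pay(int amountInCents) {
        if (amountInCents <= 0) {
            return false;
        }
        gateway.charge(amountInCents);
        audit.record("charged " + amountInCents);
        return true;
    }
}

public class PaymentServiceTest {

    @Test
    public void aPositiveAmountIsChargedToTheCard() {
        CardGateway gateway = mock(CardGateway.class);
        PaymentService service = new PaymentService(gateway, mock(AuditLog.class));

        assertTrue(service.pay(500));
        // Verify the business-critical interaction only; the exact audit
        // message is deliberately not verified here.
        verify(gateway).charge(500);
    }
}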

Oleksandr Alesinskyy replied on Wed, 2014/02/26 - 6:16am in response to: David Karr

"You should assert or verify the critical business logic, not irrelevant side effects." - if and only if you are 100% sure that changes in these side-effects would not break your application earlier or later. And even if you are sure - you are, the most likely, mistaken.


Nico Coetzee replied on Wed, 2014/02/26 - 8:43am

Instead of print statements, I set breakpoints and run the unit test in debug mode (from Eclipse). It took me a while to drop the print habit, but breakpoint-based tracing has turned out to be much more valuable, since you can drill down into the objects in scope.

Manassés Souza replied on Fri, 2014/02/28 - 7:31am

To complement good assertion practice: adding a message explaining why an assert failed helps the developer. All assert methods in JUnit support a String as their first parameter for this purpose.
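For example (JUnit 4 assumed, hypothetical values), the message shows up in the failure report:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AssertMessageTest {

    @Test
    public void theMessageExplainsWhatWentWrong() {
        int maxRetries = 2 + 1;  // hypothetical value under test
        // The optional first argument is included in the failure report,
        // e.g. "wrong number of retries expected:<3> but was:<4>".
        assertEquals("wrong number of retries", 3, maxRetries);
    }
}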

About mocking frameworks and what David Karr described: my understanding is that when you don't need heavyweight mocking, it's better to choose Mockito. Don't use a framework just because someone said it's better than another. If you are starting a new project, choose Mockito and use it with moderation.

Dan Wiebe replied on Sat, 2014/03/08 - 8:32am in response to: Oleksandr Alesinskyy

I agree with David, although I wouldn't call them "side effects," since that term is more commonly used in another context.

Each test should test one thing, and should ideally have one assert. If you assert the same thing in many tests, then when you have to change your algorithm or your architecture, many tests will fail all at once: every test that asserts anything about what changed.

Not only is that demoralizing, disheartening, and an incentive to hack rather than refactor, it also means that fixing the tests becomes a mindless, mechanical process, and you'll stop thinking about it... and probably you'll end up changing an expected value to fit the actual one somewhere you shouldn't, and your test suite will start locking defects in rather than out.

Constantly refactor and normalize your code so that every decision is made only in one place, and everything that needs to know about that decision calls into that one place.  Separate your concerns appropriately, and you'll never have to worry about peripheral issues causing untested conditions that can break your code.
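A small sketch of that style (hypothetical names, JUnit 4 assumed): the formatting decision lives in one place, and each test checks one behaviour with one assert, so a change to the rule fails only the tests that describe it.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical class: the formatting decision is made in one place only.
class Price {
    private final int cents;

    Price(int cents) {
        this.cents = cents;
    }

    // Single decision point: how prices are rendered as text.
    String formatted() {
        return cents / 100 + "." + String.format("%02d", cents % 100);
    }
}

public class PriceTest {

    // One behaviour per test, one assert per test.
    @Test
    public void formatsWholeAmounts() {
        assertEquals("12.00", new Price(1200).formatted());
    }

    @Test
    public void padsSingleDigitCents() {
        assertEquals("12.05", new Price(1205).formatted());
    }
}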

Dan Wiebe replied on Sat, 2014/03/08 - 8:36am in response to: Nico Coetzee

The debugger is certainly better than logging in tests, but if you have to use the debugger to find out why your tests are failing, your tests are probably too big (unless they're integration or story tests).  Once you get them passing, go looking for single-responsibility violations in your code and see if you can't separate concerns so that you can make your tests smaller and more numerous, so that when they fail the reason is obvious.

Either debugging or putting logs in your tests is a signal that the code is trying to tell you something.  Slowing down and listening to it could be very profitable.

Oleksandr Alesinskyy replied on Mon, 2014/03/10 - 2:16pm in response to: Dan Wiebe

 That"s true that you should try to create a code without side effects - but it is not a reason to not verify such side effects if they for some reason exist. 

BTW, in many cases "side effects" are either unavoidable or even welcome - can you imagine a fluent API that doesn't use side effects?
