Alex Miller lives in St. Louis. He writes code for a living and currently works for Terracotta Tech on the Terracotta open-source Java clustering product. Prior to Terracotta he worked at BEA Systems and was Chief Architect at MetaMatrix. His main language for the last decade has been Java, although he has been paid to program in several languages over the years (C++, Python, Pascal, etc.).

Practical Unit Testing

02.22.2008

I was reading Howard's post today on his crisis of faith with respect to unit testing, and it sparked an interesting conversation with a colleague. I'm a unit testing advocate. But I'm not a big fan of test-first or test-driven development (TDD). And I'm not a believer in 100% coverage.

I believe that my job is to maximize the value of my time with respect to the product. Of course, how to do that is a big value judgment, and one that's likely impossible to actually calculate. I'm generally of the belief that writing tests pays large dividends over time. But writing tests for trivial code (I'm talking about you, getters!) is likely not going to pay a big dividend. The time spent writing (and maintaining) such a test is probably not worth it.
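To be concrete about the getter case, here's the sort of test I mean (a contrived JUnit 4 sketch; `Customer` is a made-up class):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// A hypothetical value class with a trivial accessor.
class Customer {
    private final String name;
    Customer(String name) { this.name = name; }
    String getName() { return name; }
}

public class CustomerTest {
    // This can only fail if the one-line getter is wrong, yet the test
    // still has to be read, run, and maintained for the life of the code.
    @Test
    public void getterReturnsConstructorArgument() {
        assertEquals("Fred", new Customer("Fred").getName());
    }
}
```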

With respect to TDD, the thing that bugs me the most about it is the dogmatism that the only way to build a system is to first write a test. I think that's certainly one way to build a system, but certainly not the only one. I find it works well to start with tests and the API when I understand the concepts in the system well.

But when I don't (which is maybe the more common case), I find it more useful to skip the tests and start with code. Often I will rapidly prototype the skeleton of the system and do several quick rewrites. Usually that's enough to work out the concepts. At that point, I definitely like to start working out the API and writing tests. But before then, it feels like an anchor rather than a safety net.

TDD advocates would tell me to let the tests guide the development of the API and refactor as I go, but I just find this annoying and feel that it breaks the flow. Maybe that's just me, though. I'd be interested to hear what you think.

As Howard mentioned in his blog, there is a lot of value to be had in integration tests as well, and I'm a big fan of having both unit and integration tests to cover the same code base. Or rather, my favorite is to have unit tests, component level tests, and system level integration tests. These three levels weave together to give you a really powerful safety net for your code.
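To make the levels concrete, here's roughly the distinction I have in mind (a contrived sketch; `Pricer`, `TaxPolicy`, and `FlatTaxPolicy` are invented names). A unit test isolates one class behind a stub; a component test wires a small cluster of real classes together; a system-level test (not shown) would drive a deployed instance end to end.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

interface TaxPolicy {
    double taxOn(double amount);
}

class FlatTaxPolicy implements TaxPolicy {
    public double taxOn(double amount) { return amount * 0.10; }
}

class Pricer {
    private final TaxPolicy tax;
    Pricer(TaxPolicy tax) { this.tax = tax; }
    double total(double amount) { return amount + tax.taxOn(amount); }
}

public class PricerTest {
    // Unit level: Pricer in isolation, with the tax behavior stubbed.
    @Test
    public void addsWhateverTheTaxPolicyReports() {
        Pricer pricer = new Pricer(new TaxPolicy() {
            public double taxOn(double amount) { return 5.0; }
        });
        assertEquals(105.0, pricer.total(100.0), 0.001);
    }

    // Component level: the real collaborators wired together.
    @Test
    public void flatTaxAddsTenPercent() {
        assertEquals(110.0, new Pricer(new FlatTaxPolicy()).total(100.0), 0.001);
    }
}
```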

Published at DZone with permission of its author, Alex Miller.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)


Comments

Henry Brown replied on Sat, 2008/02/23 - 5:09am

I agree that the value of prototyping, or writing code simply to test your initial ideas or assumptions, should not be ignored. However, I find that writing these pieces of code in the form of tests helps me stick to the parts I actually want to prototype. It often saves me from filling in details I don't care about simply because they are required by the parts I do. So even when I'm not sure how I will eventually proceed, writing a test first still helps keep me focused.
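For example, if all I want to explore is a parsing API, the test pins down just that slice (a contrived sketch; `QuoteParser` and its CSV format are invented for illustration):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Minimal skeleton so the sketch compiles; in a real prototype this is
// the class being explored, filled in just enough to make the test run.
class QuoteParser {
    String symbolOf(String line) {
        return line.split(",")[0];
    }
}

public class QuoteParserPrototypeTest {
    // Only the slice of API being prototyped is exercised; file I/O,
    // validation, and error handling are deliberately left out.
    @Test
    public void extractsTheSymbolFromACsvLine() {
        assertEquals("IBM", new QuoteParser().symbolOf("IBM,120.50,2008-02-23"));
    }
}
```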

Guido Amabili replied on Sat, 2008/02/23 - 9:54am

Hi Alex,

I work more or less the same way you do.
I do extensive design (by extensive I mean I have class diagrams and algorithms laid out) before touching any keyboard, then start writing the domain classes and let my IDE automatically create the unit test classes. From there on it's TDD, but I'm also not a big fan of testing every single line of code.

Cheers, GuidoLx

 

Artur Biesiadowski replied on Sat, 2008/02/23 - 10:43am

While I understand the power of unit testing (and automated testing generally), there are some areas which are not easily testable. In my work, we have had quite a few failures recently, plus a few smaller bugs. For these pieces of the system, we don't have ANY automated tests. Somebody could say - oh, you don't do unit testing, no wonder your application failed. Here are the actual reasons for the failures:

- a network router (3 hops from our server) got overloaded by an application from another department, cutting our bandwidth to 1% of normal; we had dual network cards, duplicated routing, etc., but so did the application which killed our network (it was able to bring down both links at once)

- SAN storage from a very respectable company is failing; obviously we have full RAID/mirroring etc., but something in the SAN controllers keeps failing - the provider has officially acknowledged it is their fault, and we can do nothing about it except replace the entire data center setup

- the JVM sometimes crashes in GC (once every few weeks under full load)

- the GUI frontend does not repaint one of the windows properly when the contents of the table change too often (5000+ times per second), reproducible about once a day or less

- a race condition in the server code: when more than 60000 entries were requested separately by 20 other servers in the same second, a few of the requests went unanswered

Each of these bugs (except the GUI one) cost us an outage. None of them could have been spotted by any automated testing written beforehand. Now that we know about the last bug (the race condition), we could probably round up 20 development servers to try to reproduce it (which is not easy, as we depend on other applications that carry only 5% of their production load in development due to licensing/legal reasons), but we may as well just fix it outright.

Now, I could go to my manager and propose writing full tests. It would probably double the cost of development. No, it would not help spot any of the problems we had recently. No, it would not help spot any bugs we had in the past (the only production-affecting bugs we have are deadlocks/race conditions visible under very heavy load in almost non-reproducible scenarios). Yes, it might help avoid regressions in the code, but there is no guarantee that a regression would make the code fail in exactly the same combination of threads.

So far, our only solution is code review and proofing. One person writes the code and does preliminary tests, then two people sit down and try to prove that the code can fail under specific conditions. We keep the Java 5 memory model next to us and try to find all the cases where non-visibility of an update to other threads could cause problems (as our application is quite performance sensitive and runs on 16 cores, we can't afford global locks to solve the problem for us). I don't see how a unit-like test could expose a memory-model, multithreaded bug that may never even happen on a given architecture.
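The textbook shape of the visibility problem looks like this (a contrived sketch, not our actual code). Without `volatile`, the Java memory model allows the reader thread to never see the write, so the loop can spin forever on some JVM/CPU combinations - and a unit test of the same code will usually pass anyway:

```java
public class VisibilityBug {
    // BUG: without volatile, the JMM does not guarantee the reader
    // thread ever observes this write. Declaring the field
    // "volatile" is the fix.
    private static boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(new Runnable() {
            public void run() {
                while (!stop) {
                    // busy-wait; the JIT may hoist the read of "stop"
                    // out of the loop, so this can spin forever
                }
                System.out.println("saw stop");
            }
        });
        reader.start();
        Thread.sleep(100);
        stop = true; // this write may never become visible to "reader"
        reader.join();
    }
}
```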

Yes, we live in constant fear that our latest change will affect the existing system in a bad way. But with half a million lines of heavily multithreaded and distributed code, I still think that code review >> automated testing.

That said, we obviously have some unit tests for things like our extended collection classes, small computation modules, etc. - all the trivial cases. But in our case, it is not the trivial code where the bugs happen.

Alex Miller replied on Sun, 2008/02/24 - 2:43pm

Artur,

It sounds like you might benefit from some static or dynamic analysis tools for checking your concurrency issues. FindBugs is a great first pass to find inconsistent locking and other similar problems. Then I would use a profiling thread analyzer like JProbe, OptimizeIt, or YourKit. These tools do a pretty good job of finding data races, deadlocks, and monitor contention issues. If you haven't tried them yet, I think they'd be worth the effort.
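The inconsistent locking FindBugs catches typically looks like this (a contrived sketch): a field guarded by a lock on most paths but read without it on another.

```java
public class Counter {
    private int count = 0;

    // Writes are synchronized...
    public synchronized void increment() {
        count++;
    }

    // ...but this read is not, so it races with increment() and can
    // return a stale value. FindBugs flags this mixed locked/unlocked
    // access to the same field.
    public int current() {
        return count;
    }
}
```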

It sounds to me like unit tests would not provide a lot of additional value, so I wouldn't choose to invest my time there either. Review and analysis tools seem like a better value.

Artur Biesiadowski replied on Sun, 2008/02/24 - 3:36pm in response to: Alex Miller

We have been using FindBugs for a few months and indeed, it has found a few issues in rarely executed code paths (which is very good, as problems would otherwise sit there like a bomb, waiting to explode at the least fortunate moment). For profiling, we are using JProfiler. I haven't had much luck with the thread analyzer inside it; maybe I should take a closer look at it.

Thanks for the hints.

JC Sa replied on Sat, 2009/03/21 - 2:08pm in response to: Alex Miller

The latest version of JProbe has some very good packaged features - memory analysis, performance analysis, and code coverage, I believe. Great thread analysis. They are the only ones I have found that have a real Eclipse plug-in, too. You can test drive their tools. I read a whitepaper that discussed a CPM Toolkit that sounds very useful as well.
