John is an experienced consultant specialising in Enterprise Java, Web Development, and Open Source technologies, currently based in Sydney, Australia. Well known in the Java community for his many published articles, he is the author of Java Power Tools and Jenkins: The Definitive Guide, and the founder of the open source Thucydides Automated Acceptance Test Library project. John helps organisations optimise their Java development processes and infrastructure, and provides training and mentoring in agile development, automated testing practices, continuous integration and delivery, and open source technologies in general. John is the CEO of Wakaleo Consulting, and runs several training courses on open source Java development tools and best practices.

Background Unit Testing: New Evolutions in Unit Testing and IDE Integration

02.02.2009

An emerging innovation in unit testing is the idea of Continuous Unit Testing: having your unit tests run in the background whenever you modify your code. In this approach, whenever you save a change, the appropriate unit tests are executed automatically behind the scenes. This avoids the problem of committing changes with broken tests simply because you forgot to run the relevant tests before committing. The trick, of course, is knowing what the appropriate tests are - you don't want to have to wait for all of your tests to run every time you save a change. You want to focus on the tests that are most likely to be affected by your code changes. I'm sure this sort of thing will be a standard IDE feature in a couple of years.
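To make this idea more concrete, here is a rough sketch of what a background test runner boils down to: watch the source tree and re-run tests whenever a file is saved. This is purely illustrative - it assumes JUnit 4 on the classpath, it is not how JUnitMax or Infinitest are actually implemented, and the watched directory and the test class are made-up examples.

```java
import org.junit.Test;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import static org.junit.Assert.assertEquals;

import java.nio.file.*;

public class BackgroundTestRunner {

    // A trivial stand-in for the project's real test classes.
    public static class ExampleTest {
        @Test
        public void additionWorks() {
            assertEquals(4, 2 + 2);
        }
    }

    public static void main(String[] args) throws Exception {
        // Watch a single (assumed) source directory for modifications.
        Path sourceDir = Paths.get("src/main/java");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        sourceDir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

        while (true) {
            WatchKey key = watcher.take();               // block until something changes
            for (WatchEvent<?> event : key.pollEvents()) {
                Path changed = (Path) event.context();
                if (changed.toString().endsWith(".java")) {
                    // A real tool would work out which tests are affected by the change;
                    // this sketch simply re-runs one illustrative test class.
                    Result result = JUnitCore.runClasses(ExampleTest.class);
                    System.out.printf("%s changed: %d tests run, %d failure(s)%n",
                            changed, result.getRunCount(), result.getFailureCount());
                }
            }
            key.reset();                                 // re-arm the watch key
        }
    }
}
```

The hard part, as discussed below, is not the watching and re-running, but deciding which tests are worth running for a given change.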

For now, however, it isn't, and you need to resort to third-party tools. I am aware of two emerging tools that try to achieve this. The first is JUnitMax, from Kent Beck, one of the pioneers of XP and TDD. The second is Infinitest. Infinitest is free and open source, whereas JUnitMax is available for a modest subscription fee ($2 US per month at this stage). I don't think this is a big deal, though it may well put some people off trying it, which would be a shame. Both tools are pretty green, with little in the way of documentation (there is a decent introduction to JUnitMax here).

Infinitest is interesting in that it is supposed to work with both Eclipse and IntelliJ, whereas JUnitMax is purely an Eclipse plugin. Infinitest is actually a separate Java application that you run from within Eclipse via a Java Application run configuration. The integration with the IDE is therefore far from seamless, though there is a step-by-step tutorial of what you are supposed to do in Eclipse. Unfortunately, when I ran it on a module in my multi-module Maven project, it didn't manage to find any unit tests to run.

JUnitMax comes as an Eclipse plugin, so installing it into Eclipse is more straightforward. To try it out, I installed the plugin and made a few trivial changes to one of my unit tests. When I saved the test case, sure enough, JUnitMax kicked off some unit tests in the background, and found an error that I wasn't expecting! JUnitMax runs discreetly in the background, and only makes a fuss if it discovers any failing tests. In this case, unit test errors appear like compilation errors, in the margin of the source code. This is very cool - unit test failures are treated on the same level as compilation errors in the IDE, which is a great visibility boost. It is also a great productivity booster - you don't need to remember to run your unit tests after each change, and you don't have to wait (and possibly be side-tracked) while your unit tests are running.

[Screenshot: junitmax-error.png - a failing unit test flagged as an error marker in the source code margin]

As you would expect, the failing unit tests are also flagged in the project view, so you are less likely to miss test failures, even in unexpected places:

[Screenshot: junitmax-error2.png - failing unit tests flagged in the project view]

Ensuring that your unit tests stay up to scratch is great, but what is really useful is the ability to run the unit tests relating to a particular application class whenever you change that class. So I modified an application class, introducing an error. Sure enough, within a few seconds, a red marker appeared on the corresponding unit test class. As I mentioned earlier, the tricky thing here is knowing which tests to run, so that you learn about failed tests as quickly as possible.

JUnitMax uses some clever heuristics to guess which tests need to be run first, and it runs the fastest tests first. At the end of the day, all of your unit tests are executed in the background whenever you make a change. This is an approximate process, but it seems to work OK. A more accurate technique would require code coverage metrics that back-track from each line of code to the unit tests that exercise it. This is much harder. Clover can do this, but not in real time. If you integrate Clover with Maven, for example, you can get Maven to run a subset of the unit tests based on what code has been changed and which tests execute that code. This is an excellent way to speed up the build process, but it isn't yet at the stage where you can integrate it smoothly into your IDE, as you can with JUnitMax. A perfect JUnitMax would combine these two technologies, so that only the relevant tests are run each time. But, for now, it's a promising start.
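To illustrate what coverage-based test selection involves, here is a minimal sketch in plain Java. It is not how JUnitMax or Clover actually work: the coverage map here is built by hand from made-up class and test names, whereas a real tool would derive it from per-test coverage data recorded during a full test run.

```java
import java.util.*;

public class AffectedTestSelector {

    // Coverage map from a previous full test run:
    // production class name -> names of the tests that executed it.
    private final Map<String, Set<String>> testsByCoveredClass = new HashMap<>();

    // Record that a test executed (covered) a given production class.
    public void recordCoverage(String testName, String coveredClass) {
        testsByCoveredClass
                .computeIfAbsent(coveredClass, c -> new HashSet<>())
                .add(testName);
    }

    // Return the tests that should be re-run when the given classes change.
    public Set<String> testsAffectedBy(Collection<String> changedClasses) {
        Set<String> affected = new TreeSet<>();
        for (String changedClass : changedClasses) {
            affected.addAll(
                    testsByCoveredClass.getOrDefault(changedClass, Collections.emptySet()));
        }
        return affected;
    }

    public static void main(String[] args) {
        AffectedTestSelector selector = new AffectedTestSelector();

        // Illustrative coverage data only; the class and test names are made up.
        selector.recordCoverage("AccountTest", "Account");
        selector.recordCoverage("TransferServiceTest", "Account");
        selector.recordCoverage("TransferServiceTest", "TransferService");
        selector.recordCoverage("ReportTest", "ReportGenerator");

        // Only the tests that actually exercise Account need to be run first.
        System.out.println(selector.testsAffectedBy(Arrays.asList("Account")));
        // prints: [AccountTest, TransferServiceTest]
    }
}
```

Tracking coverage at the line level rather than the class level, and keeping that map up to date as the code changes, is precisely the part that makes the real-time version of this so hard.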

From http://weblogs.java.net/blog/johnsmart

Published at DZone with permission of John Ferguson Smart, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Comments

Steven Baker replied on Mon, 2009/02/02 - 5:42am

Although I like the idea, it seems a little too prone to error (choosing the wrong tests), which could give devs a false feeling that everything is okay when it isn't.

Typically, in my experience, a strict process of running all tests before check-in works the best. The kind where, if you break the build because you didn't run your tests, you get the back of your head slapped.

Artur Biesiadowski replied on Mon, 2009/02/02 - 10:55am in response to: Steven Baker

>typically, in my experience, a strict process of running all tests before check-in works the best.

90% of the build-breaking issues I have experienced were due to missing check-ins. We were using ClearCase with a not-so-perfect Eclipse plugin and a non-trivial path configuration, so it was quite easy to forget to check in some directory (if a directory was a link, the 'checked-out' flag was not visible, and in ClearCase, if you don't check in the directory, others don't see the files that were checked in inside it). Running obligatory unit tests on the developer side would not help at all. The only thing that helps is a continuous build on the server side, from repository sources.

Has anybody done that? Some kind of commit mechanism which would actually accept the commit, but 'publish' it only after the tests are done? (Or at least after it compiles successfully.) It could be especially tricky if compilation/testing takes long enough that a commit from somebody else gets in the way, or you make another commit on top of it... Probably some kind of git/mercurial thing with multi-level repositories, including a per-developer repository from which the tests are run... Has anybody done that with success?

John Ferguson Smart replied on Mon, 2009/02/02 - 1:42pm

This is a tricky one. TeamCity and Pulse have the idea of "personal builds", but the implementation is a tad too intrusive for my liking (commits can only be done from the IDE and must go through the TeamCity server, for example). A more generic approach that I am investigating myself is to use development and integration branches in Subversion, along with the Subversion 1.5 merge features, to "promote" changes automatically from the development branch to the integration branch if (and only if) the build succeeds. Still fairly experimental, though.

Chris Wilkes replied on Mon, 2009/02/02 - 2:20pm

The link to Infinitest is bad; I'm guessing this is it: http://code.google.com/p/infinitest/

John Ferguson Smart replied on Mon, 2009/02/02 - 6:02pm in response to: Steven Baker

True, running all the tests before check-in is a recommended practice. But, from what I understand, JUnitMax does run all of the tests; it just runs the ones it thinks are the most relevant first.

Michał Jankowski replied on Tue, 2009/02/03 - 4:02am

And if you have a big project, this would never work. In my company we even have to turn Checkstyle off, as Eclipse runs out of memory fast (with 2 GB assigned). I think this kind of testing would bog it down even more.

Parag Shah replied on Thu, 2009/02/05 - 2:02am

Nice article. Continuous unit testing is an interesting idea, but I am not very sure how well it will scale up for large projects. Also, if we are disciplined enough to run unit tests before every check-in, then we may not really need continuous testing happening in the background.

But having said this, it might make sense to have a small subset of tests (maybe the tests for the core of the software) run continuously on every save.

 Just my thoughts...

 --

Regards

Parag

