Rob Williams is a probabilistic Lean coder of Java and Objective-C. Rob is a DZone MVB and is not an employee of DZone. He has posted 170 posts at DZone. You can read more from him at his website.

Getting Set Up to Use JSFUnit

12.11.2008
Per my earlier post about the joy of JUnit plugin tests, having tests that actually extend all the way out to the edges of the app is life-changing. Wait, let me revise that, because I am sure that all the people out there who have suffered through Cactus et al. are shaking their heads saying 'der, I have been doing that a long time.' In Eclipse PDE land, you write a test and you run a test. So my revised Putin's-submarine-flag-planting claim is: TDD that extends out to the edge of the app is life-changing.

Over the weekend, I was able to confirm that there really is nothing comparable for plain JSF dev. After looking around again, I decided to use JSFUnit. There is a new beta release of it, which looks great. Running the tests, however, is a trip back to the same ugly dumpster dive that anyone who's tried to do such stuff has probably forcefully repressed. You know, you have to figure out how you are going to get the container to start. JSFUnit has documented ways to integrate with Maven, but the Maven plugin in Eclipse can't easily map a custom goal to a run operation. You would think you'd be able to just define a run configuration that you could pass the test information to. You'd be wrong. Things are further complicated by the fact that when you are working along, you have to publish to get your WAR for testing. This seems like something WTP should have taken care of, but five years on, they are still struggling with validation and code completion.

I thought about using Cargo. The plan was a JUnit 4 test with a suite inside, using BeforeClass and AfterClass to start and stop the server. But before even getting a chance to have that fun, just getting JSFUnit integrated was bloody hell. Why? The same stupid Java reasons that have been around forever: old versions of various common libs getting entangled. Is there really no way to curb this nonsense? First, we found that the JUEL jar bundled the EL classes. (At this juncture, I started to think that familiar thought: jesus, we moved everything to Maven, but the process of assembling the classes we need is still a nightmarish mess. Why? Because there is a faulty premise at the core of the declarative process: that people won't be stupid. These guys are publishing to Maven, but including the EL classes inside their jar. How much time would it have taken them to declare a dependency instead? Isn't this the cooperative multitasking nightmare all over again?) Well-meaning, meatheaded liberal design is probably the real problem here. As is always the case in such circumstances, the designers of such things don't see the inability to hold water as a shortcoming: they paddle into the Ptolemaic epicyclic stream doing the backstroke and whistling Dixie. Witness Maven: let's make a tool for crawling up and down the dependency tree. See, what'll be cool then is that they can become the small-town dick and pat themselves on the back when they crack each such stupid case. Of course, this all presupposes a blind sponsor, because surely anyone actually paying for the escapade/charade of undirting the drawers of distantly related libs would object. This is the Quixotesque conceit of so much 'programming'; it's really kind of a wonder anything gets done.
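The Cargo plan above can be sketched roughly like this. The container calls here are hypothetical stand-ins for Cargo's actual API, and the JUnit 4 annotations are noted in comments rather than imported, so the sketch stays self-contained; it only shows the lifecycle shape, not a working JSFUnit setup.

```java
// A minimal sketch of the suite lifecycle described above: start the servlet
// container once before any test runs, and stop it after the last one. In a
// real JUnit 4 class, startContainer/stopContainer would carry @BeforeClass
// and @AfterClass, and their bodies would call into Cargo; the boolean flag
// here is a hypothetical stand-in for the container.
public class ContainerSuiteSketch {

    static boolean containerRunning = false;

    // would be annotated @BeforeClass in JUnit 4
    static void startContainer() {
        // with Cargo, this is where the container is configured and started
        containerRunning = true;
    }

    // would be annotated @AfterClass in JUnit 4
    static void stopContainer() {
        containerRunning = false;
    }

    // runs each test with the container up, guaranteeing teardown afterward
    public static boolean runSuite(Runnable... tests) {
        startContainer();
        try {
            for (Runnable test : tests) {
                test.run();
            }
            return containerRunning;
        } finally {
            stopContainer();
        }
    }

    public static void main(String[] args) {
        boolean ranWithContainer = runSuite(() -> {
            if (!containerRunning) {
                throw new IllegalStateException("container was not up");
            }
        });
        System.out.println(ranWithContainer); // prints "true"
    }
}
```

The point of the try/finally is the same one BeforeClass/AfterClass buy you: the container comes down even if a test in the suite blows up.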

Lean would, of course, show these detours as off the value stream map (VSM).

So, in the spirit of following up deconstruction with something constructive, a few maybes:

  1. While slashing through this, I looked at some shell scripts that grep jar contents. Kinda cool. Of course they are all based on find/exec variants (I tried for 10 minutes to get Textism to allow me to show the actual command). I didn't know that you can pipe the results of said find straight into a grep. I also didn't know that you can do grep `find . -name '*.jar'`. Finally, there were a bunch of versions that do the find and throw it into a for loop, which has the benefit of being able to show which files were being looked into. Of course, it turned out the one I was looking at was Bourne. Will probably go back to this.
  2. I still like the idea of writing a crawler that spiders into open source projects and scores their potential for mayhem. Think about how stupid it is that someone just sloughs scrud out into a public repository and it blows up on some number of the unsuspecting users.
  3. I love how people talk all the time about how classloaders are the part of Java that separates the chillun from the grown folks (probably half or more of whom would fail a basic design patterns test), and yet here we are admitting that the thing is so stupid that it will just load up stuff and let the whole house burn down. That would be bad enough, but isn't there a certain law of traceability that might apply here?
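For what it's worth, the for-loop variant from item 1 can be reconstructed roughly like this. The function name `jargrep` is my own, and it assumes `unzip` is on the path; the original scripts weren't shown, so this is a sketch of the idiom, not the exact command.

```shell
#!/bin/sh
# jargrep: search every jar under a directory for a given class/path pattern,
# printing which jar each hit came from -- the "throw the find into a for"
# variant, since it lets you see which file is being examined.
jargrep() {
    pattern="$1"
    dir="${2:-.}"
    for jar in $(find "$dir" -name '*.jar'); do
        # unzip -l lists the jar's entries without extracting anything
        if unzip -l "$jar" 2>/dev/null | grep -q "$pattern"; then
            echo "$jar: contains $pattern"
        fi
    done
}

# example: find which jar is smuggling the EL classes
# jargrep 'javax/el/' WEB-INF/lib
```

This is exactly the kind of thing that would have shortened the JUEL hunt above from hours to seconds.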
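On the traceability point in item 3: the classloader actually can tell you which jar a class came from, via its ProtectionDomain's CodeSource. That much is standard JDK API; wiring it into any kind of diagnostic is left as the reader's exercise.

```java
import java.security.CodeSource;

// Answering the traceability question: the runtime does know which jar (or
// classes directory) a class was loaded from, and will tell you if asked.
public class WhichJar {

    public static String locationOf(Class<?> c) {
        CodeSource src = c.getProtectionDomain().getCodeSource();
        // core classes loaded by the bootstrap loader may report no location
        if (src == null || src.getLocation() == null) {
            return "<bootstrap>";
        }
        return src.getLocation().toString();
    }

    public static void main(String[] args) {
        // prints the jar/classes-dir path this class came from, then the
        // (usually bootstrap) origin of java.lang.String
        System.out.println(locationOf(WhichJar.class));
        System.out.println(locationOf(String.class));
    }
}
```

Run against each suspect class (say, the EL API), this tells you instantly which jar won the classloading lottery.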

Ever heard the claim that 80% of the mayhem in the world can be dispelled by merely turning the lights on? The cover in this case is simple: the ignition is combinatorial. Meaning: because there is no single scenario that every passer over a given bridge falls through, the land mines never get dug up. Trust but verify. Maybe the Maven guys should think about a simple idea: certify things before they are allowed into the repo.

From http://www.jroller.com/robwilliams

Published at DZone with permission of Rob Williams, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)