Alex is a software engineer at Google working on Android development tools, especially Android Studio. His interests include Java, API design, OOP, IDEs, and testing. Alex spends some of his spare time working on open source, blogging, writing technical articles, and speaking at international conferences. The opinions expressed here represent his own and not those of his employer. Alex is a DZone MVB (not an employee of DZone) and has posted 49 posts at DZone.

Thoughts about "UI Test Automation Tools are Snake Oil"


I just finished reading "UI Test Automation Tools are Snake Oil" by Michael Feathers. Although I agree with many of the ideas in the article, I also think it contains some hasty generalizations and misplaced blame.

Mr. Feathers points out that “selling UI test automation tools is irresponsible” and that these tools are sold “with a dream, a very seductive and dangerous one.” The scenario he refers to, as I understand it, is the one where tool vendors sell their (expensive) tools promising:

  • “click and type,” quick test generation with a record/playback tool, no coding required
  • test the whole application, including all the different use-case and data permutations, using this tool

This can be easily interpreted as “with my tool, you will have a comprehensive test suite in no time, and you can hire cheap monkeys to do all the testing.” It is indeed a very seductive promise: spend some money and get what you want fast and with minimal effort, just like “weight loss, without diet and exercise.”

I completely agree with Mr. Feathers that selling something based on a fallacy is irresponsible. At the same time, I also think this is where the blame is misplaced. Instead of blaming the tool, blame should be shared between the tool vendors and the people buying the tool, as long as the business transaction is based on the “seductive dream.” IMHO, this is a people problem: on one hand we have vendors taking advantage of their customers’ lack of knowledge; on the other, customers who did not do their homework in time and by now have a big, untestable mess.

Another good point Mr. Feathers makes is that UI testing tools are brittle. Selling a UI testing tool with the promise that the generated test suite will never break is also careless. The fact is, when testing UIs there are many external elements that can introduce false failures. IMHO, it is impossible to eliminate all of them, but at the very least the tool vendor should warn customers about these limitations and offer ways to mitigate them (more on that later).
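One common mitigation for these externally caused false failures is to poll for a condition instead of asserting immediately or sleeping for a fixed amount of time. Here is a minimal sketch in plain Java; the `Condition` interface and `waitFor` helper are hypothetical names I made up for illustration, not from any particular tool:

```java
// A minimal sketch: poll until a condition holds, instead of failing
// the moment a slow window or network call makes the UI lag.
public class WaitForSketch {
    interface Condition { boolean isMet(); }

    // Polls every 50 ms until the condition holds or the timeout expires.
    static boolean waitFor(Condition c, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (c.isMet()) return true;
            Thread.sleep(50);
        }
        return c.isMet();
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Simulated "UI becomes ready" roughly 200 ms after the test starts.
        boolean ready = waitFor(() -> System.currentTimeMillis() - start > 200, 2000);
        System.out.println(ready ? "window appeared" : "timed out");
    }
}
```

Real UI testing libraries ship similar “wait for” utilities; the point is that the test absorbs timing noise instead of reporting it as a failure.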

I also agree with Mr. Feathers that record/playback doesn’t work. I wish he had gone into more detail, so I could see whether we agree based on the same premises. In my opinion, the major weakness of existing record/playback tools is the expensive maintenance of the generated tests. Recorded scripts are often long and written in proprietary languages lacking object-oriented features. Modularization and refactoring are hard or impossible, resulting in duplicated test code. The common end result is that changes in the application require re-recording all test scenarios, a labor-intensive and error-prone task.
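To make the maintenance problem concrete, here is a sketch of the alternative: hand-written, object-oriented test code where shared UI steps live in one reusable “page object.” The names (`UiDriver`, `LoginPage`) are hypothetical, and the fake driver just records actions so the sketch is self-contained:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for whatever actually drives the real UI (Swing robot, browser, etc.)
interface UiDriver {
    void type(String field, String text);
    void click(String button);
}

// One place to update when the login screen changes, instead of
// the same recorded steps duplicated across every test script.
class LoginPage {
    private final UiDriver ui;
    LoginPage(UiDriver ui) { this.ui = ui; }
    void logInAs(String user, String password) {
        ui.type("username", user);
        ui.type("password", password);
        ui.click("login");
    }
}

public class PageObjectSketch {
    public static void main(String[] args) {
        List<String> actions = new ArrayList<>();
        UiDriver recording = new UiDriver() { // fake driver: records actions
            public void type(String f, String t) { actions.add("type:" + f); }
            public void click(String b) { actions.add("click:" + b); }
        };
        new LoginPage(recording).logInAs("alex", "secret");
        System.out.println(actions); // [type:username, type:password, click:login]
    }
}
```

When the login screen changes, only `LoginPage` needs updating; with a recorded script, you would have to re-record every scenario that logs in.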

Not everything is black and white. UI test automation tools still have a place. Not every shop has the resources or time to rewrite an application just to make it more testable. I’ve seen applications with massive code bases that were not written with testability in mind. Extending or refactoring them is scary due to the lack of the safety net that automated tests provide. The only way to test them is through the UI. UI testing can provide an initial safety net, which later on can be enhanced (or replaced) with API-based unit tests.

In his post, Mr. Feathers asks, “Developing them open-source? Well, let your conscience be your guide.” I don’t sell a UI test automation tool per se. What I provide is an open source, API-based UI testing library called FEST (you can find some testimonials here). I think I’ve been pretty responsible by pointing out that UI testing is fragile and by offering ways to overcome this limitation. What I’m missing, and I have to thank Mr. Feathers for pointing it out, is a guide on how to test the UI layer in isolation (which I will be writing soon).

As I mentioned in a previous post, I’m currently working in my spare time on a record/playback tool. My intention is to create a tool that generates clean Java code, as if it were hand-crafted, to overcome what I think are the problems with this technique. We’ll see if I can accomplish this ambitious goal :)

Overall, I enjoyed Mr. Feathers’ article. It makes pretty good points and observations. I wish he had backed up his ideas with more details and examples, to sound less emotional and more objective. The problem is in the human factor, not in the tools. Ideally, vendors (in general) should set the right expectations about their products, and customers should have enough knowledge to avoid being taken advantage of. At least some of us are trying to do the right thing :)

Feedback is always welcome :)


Published at DZone with permission of Alex Ruiz, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)



Jan-kees Van Andel replied on Tue, 2010/01/19 - 6:32am

Like all tools, UI test automation tools have their place in the software development lifecycle.

UI testing is very useful for application parts that are hard or impossible to unit test. Examples are JSPs, JavaScript, and navigation. If you use a tool like Selenium to test this part, you'll at least know that there are no big errors there, for example a JSP compile error caused by a forgotten end tag. (That particular case can also be caught by precompiling your JSPs.) But a piece of logic in a JSP that causes an NPE when invoked cannot be caught by precompilation and must be tested on a running server.

However, as said, web tests (like all UI-layer tests) are brittle. Change one ID or change the structure of your HTML and they will fail. In some cases this is the desired behavior; in some cases it's not. It's the job of the scripter to write proper scripts that, for example, don't rely on dynamic content or other volatile elements.

Another issue with web tests is that they often require an online backend, making them slow and even more brittle.
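One way to take the online backend out of the equation is to run the web test against a local stub that serves canned responses. Here is a self-contained sketch using the JDK's built-in `com.sun.net.httpserver.HttpServer`; the `/accounts` endpoint and its payload are invented for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.InetSocketAddress;
import java.net.URL;

public class StubBackendSketch {
    public static void main(String[] args) throws Exception {
        // Stub backend: serves a canned response instead of hitting the real system.
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext("/accounts", exchange -> {
            byte[] body = "[{\"id\":1,\"name\":\"checking\"}]".getBytes("UTF-8");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        stub.start();
        int port = stub.getAddress().getPort();

        // The "web test" talks to the stub exactly as it would to the real backend.
        URL url = new URL("http://localhost:" + port + "/accounts");
        BufferedReader in = new BufferedReader(
            new InputStreamReader(url.openStream(), "UTF-8"));
        String response = in.readLine();
        in.close();
        stub.stop(0);

        System.out.println(response);
    }
}
```

The test now runs fast, offline, and deterministically; the trade-off is that it no longer exercises the real backend, so it must be complemented by integration tests elsewhere.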

Partly because of the above, web tests are often very shallow. If you open a list of accounts, you cannot assert the correctness of the returned accounts. You'll probably only be able to test that IF a list of accounts is returned THEN it has the correct format.
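That "correct format, not correct content" check can be as simple as a regular expression over the rows scraped from the page. A minimal sketch, with a row format invented for illustration:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class ShallowCheckSketch {
    // Matches rows like "1234567890;Some Name;1234.56" without
    // asserting anything about the actual account values.
    private static final Pattern ACCOUNT_ROW =
        Pattern.compile("\\d{10};[^;]+;\\d+\\.\\d{2}");

    public static void main(String[] args) {
        List<String> rows = Arrays.asList( // what the UI test scraped from the page
            "1234567890;Alice Example;1500.00",
            "0987654321;Bob Example;20.50");
        for (String row : rows) {
            if (!ACCOUNT_ROW.matcher(row).matches()) {
                throw new AssertionError("bad row format: " + row);
            }
        }
        System.out.println("all rows well-formed");
    }
}
```

This catches rendering and escaping bugs, but says nothing about whether these are the right accounts; that is the part deeper tests have to cover.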

For this reason, you need to rely on other types of tests for deeper checks. Unit tests are the way to go here.

Apart from that, any self-respecting IT organization needs code reviews, quality metrics and tooling (e.g. FindBugs, Checkstyle, Fortify...), functional testing, system testing, (automated) performance testing, etc.

Long story short, web testing definitely has its use, but like every tool/method, it only solves a part of the problem.
