

Test Attribute #5 – Differentiation

07.25.2014
This is the fifth post in the Test Attributes series, which started off with the celebrity-level “How to test your tests” post.

Differentiation is not an attribute of a single test. It doesn’t ride alone, because it only shows up when there are multiple tests.

Tests allow us to (a) know something is wrong and (b) locate that something. We want to plant lots of clues for our future selves (or another victim of our code) who will need to analyze and solve the problem. To explain how, and I really hate doing this, I’ll raise the ghost of our fallen enemy: Waterfall.

Years ago, when I visited water-world, I used to write SDDs. These were the dreaded Software Detailed Design documents, and we wrote them for both features and components. Of course, nobody liked them: not their templates, not the weird words at the beginning. They even smelled funny. But…
They had one thing going for them: in order to write one, you had to think about what you were going to do. (Sounds like the biggest benefit of TDD, right?) Once we reviewed the documents, they made a good starting point for asking “what if” questions. What happens in the case of a disconnect? What if the component is not initialized in time?
As part of our learning, at one point we even added a test-case description to the doc, so the writer had to think up front about all the cases that needed checking, and we could review those too. The list also served as a checklist for the implementer to test against.

Back to the future


That’s right, waterfall was evil, but sometimes it had some good parts in its heart. We usually give BDUF (big design up front) a bad rap, but really it’s the documentation effort that bothers us, not the thinking up front. Thinking about something before doing it actually correlates with success. Imagine that.
TDD tells us to focus on the current test. The hardcore folks take that to the extreme, but in practice it’s really hard to do.
While we’re working on one scenario, we’re still thinking about the other “what ifs”. And if we’re not doing TDD and write the code first, we think about those “what ifs” as we code.
We should embrace the way we think, and make the most of it.

Baking Differentiation In


We’re already doing the thinking about the scenarios and what makes them different from each other. All we have to do now is make sure we leave a breadcrumb trail of our thoughts.
  • Group the test cases. Put all related cases in one place, separate from the others: a separate class/file with a distinct group name. Yes, even if there are other tests for that method – remember, convention should help us be effective, not restrict us just because it’s there. (See the first sketch after this list.)
  • Review the test names as a group. First, look for missing cases, and if there are any, write tests for them. Then review the names in the group individually and see if they complement each other. If the names overlap, move the distinguishing part to the left, so you can still tell the tests apart when the test runner doesn’t show the entire name.
  • Review the test body. Sometimes we “cover” the code as part of the setup for the test, and what actually differentiates the tests are the settings that differ between them. Make the tests reflect that: separate the common setup lines from the differentiating setting and the action. You can also try (though you may not always succeed) to extract a common setup, and leave only the remaining, distinct lines in each test. (See the second sketch after this list.)
  • Review the covered code. You can even leave hints in the code itself, by matching the names of variables and functions in the tested code to the naming used in the tests. However, much like stale comments, this can go bad if things don’t get updated during refactoring. Use at your own risk.
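To make the first two bullets concrete, here’s a minimal sketch in Java with JUnit 4. Everything in it is hypothetical (the Discount class, its applyFor method, the test names); the point is that all the cases for one behavior live in a single, distinctly named class, and the distinguishing part of each test name sits on the left, so a runner that truncates long names still tells them apart.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical code under test, included so the sketch is self-contained.
    class Discount {
        int applyFor(boolean newCustomer, int total) {
            if (total == 0) return 0;
            return newCustomer ? total * 90 / 100 : total * 95 / 100;
        }
    }

    // One group: every test of Discount.applyFor lives in one distinctly
    // named class, separate from other tests.
    public class DiscountApplyForTests {

        // The distinguishing part comes first ("newCustomer",
        // "returningCustomer", "emptyCart"), so truncated names
        // can still be told apart in the runner.
        @Test
        public void newCustomer_getsTenPercentOff() {
            assertEquals(90, new Discount().applyFor(true, 100));
        }

        @Test
        public void returningCustomer_getsFivePercentOff() {
            assertEquals(95, new Discount().applyFor(false, 100));
        }

        @Test
        public void emptyCart_paysNothing() {
            assertEquals(0, new Discount().applyFor(false, 0));
        }
    }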
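The third bullet, sketched the same way (again, all names are made up): the shared arrangement moves into a @Before method, so each test body is left with only the differentiating setting, the action, and the assertion.

    import static org.junit.Assert.assertEquals;

    import org.junit.Before;
    import org.junit.Test;

    // Hypothetical code under test, kept minimal for a self-contained sketch.
    class Cart {
        private int total;
        void add(int price) { total += price; }
        void applyCoupon(int percent) { total = total * (100 - percent) / 100; }
        int total() { return total; }
    }

    public class CartCouponTests {

        private Cart cart;

        // The common setup, shared by every scenario, lives in one place.
        @Before
        public void createCartWithOneItem() {
            cart = new Cart();
            cart.add(100);
        }

        // What remains in each test is only what differs: the coupon value,
        // the action, and the expected result.
        @Test
        public void tenPercentCoupon_reducesTotalByTen() {
            cart.applyCoupon(10);
            assertEquals(90, cart.total());
        }

        @Test
        public void zeroPercentCoupon_leavesTotalUnchanged() {
            cart.applyCoupon(0);
            assertEquals(100, cart.total());
        }
    }

With this split, a failing test name plus its two or three distinct lines are usually enough to tell which scenario broke, and why.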
In order to analyze a problem when tests fail, we need to get into detective mode. The more evidence we have, the better. With enough differentiation, we can build a mental model of what works and what doesn’t, and better yet, of where the problem might lurk, so we can go on and fix it.
Published at DZone with permission of Gil Zilberfeld, author and DZone MVB.
