To Err Is Human, to Catch an Error Shows Expertise
Experts are good at avoiding mistakes, but they’re even better at recognizing and fixing them.
If you ask an elementary student something like, “How can we know that the answer to this problem cannot be 8,769?” they might only be able to say, “Because the correct answer is 8,760.” That is, the only tool they have for checking a result is to compare it to a known quantity. A more sophisticated student might be able to say something like, “We know the result must be even” or “The result cannot be more than 8,764” for reasons that come from context.
Experienced programmers write fewer bugs than rookies. But more importantly, the bugs they do write don’t live as long. The experienced programmer may realize almost immediately that a piece of code cannot be right, while the rookie may not realize there’s a problem until the output is obviously wrong. The more experienced programmer might notice that the vertical alignment of the code looks wrong, or that incompatible pieces are being used together, or even that the code “smells wrong” for reasons he or she can’t articulate.
An engineer might know that an answer is wrong because it has the wrong magnitude or the wrong dimensions. Or maybe a result violates a conservation law. Or maybe the engineer thinks “That’s not the way things are done, and I bet there’s a good reason why.”
“Be more careful” only goes so far. Experts are not that much more careful than novices. We can only lower our rate of mistakes so much. There’s much more potential for being able to recognize mistakes than to prevent them.
A major step in maturing as a programmer is accepting the fact that you’re going to make mistakes fairly often. Maybe you’ll introduce a bug for every 10 lines of code, or at least one for every 100 lines. (Rookies write more bugs than this but think they write fewer.) Once you accept this, you begin to ask how you can write code to make bugs stand out. Given two approaches, which is more likely to fail visibly if it’s wrong? How can I write code so that logic errors are more likely to show up as compile errors? How am I going to debug this when it breaks?
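As one illustration of “fail visibly,” consider two ways of looking up a value in Python. The names and the scenario here are my own, hypothetical ones, not something from the article; the point is only the contrast between a silent default and a loud failure.

```python
# Hypothetical example: two ways to look up a discount rate by tier name.

RATES = {"standard": 0.0, "member": 0.10}

def rate_silent(tier):
    # Fails invisibly: a typo like "memberr" silently falls back to 0.0,
    # and the bug surfaces far away, as subtly wrong totals.
    return RATES.get(tier, 0.0)

def rate_loud(tier):
    # Fails visibly: the same typo raises KeyError at the call site,
    # pointing straight at the mistake.
    return RATES[tier]
```

Both functions behave identically on correct input; they differ only in how loudly they announce a mistake, which is exactly the property the questions above are probing.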
Theory pays off in the long run. Abstractions that students dismiss as impractical probably are impractical at first. But, in the long run, these abstractions may prevent or catch errors. I’ve come to see the practicality in many things that I used to dismiss as pedantic: dimensional analysis, tensor properties, strongly typed programming languages, category theory, etc. A few weeks into a chemistry class, I learned the value of dimensional analysis. It has taken me much longer to appreciate category theory.
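Dimensional analysis can be carried into code, too. Here is a minimal sketch of the idea (my illustration, not from the article): tag each quantity with its unit, so that adding incompatible dimensions fails loudly instead of producing a plausible-looking wrong number.

```python
# Sketch: unit-tagged quantities. The Quantity class and unit strings
# are illustrative; real projects would use a library such as pint.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    unit: str  # e.g. "m" for meters, "s" for seconds

    def __add__(self, other):
        # Addition only makes dimensional sense for matching units.
        if self.unit != other.unit:
            raise TypeError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

distance = Quantity(100.0, "m")
extra = Quantity(42.0, "m")
elapsed = Quantity(9.58, "s")

total = distance + extra      # fine: meters plus meters
# distance + elapsed          # raises TypeError: cannot add m to s
```

The mistake that dimensional analysis catches on paper becomes a runtime error here; in a statically typed language the same discipline can turn it into a compile error.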
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)