I like software - reading, tinkering, designing, coding. I have been doing this for 20 years or so, and I would not mind continuing for the foreseeable future. Fortunately for me, this is my profession as well, and I have managed to get paid for it for some 14 years now. Although I do not have a strong bias towards any particular business domain, I have been working with some pretty big names in finance, and you might get a hint of that from my entries.

Common Sense and Code Quality, Part 1

03.09.2012

If you are involved in a software project (as an individual coder, technical team lead, architect or project manager), chances are that code quality is not the first thing on your mind. The truth, however, is that it needs to be on everyone's radar. It is one of those things that needs a well-thought-out strategy and continued focus throughout the project's life cycle. Otherwise it simply spirals out of control and comes back to bite you when the project can ill afford a quality issue.

 

This article takes a simple, common-sense approach to code quality. The intent is to demystify code quality and help project teams pick processes and tools that make sense to them.

To help contain the scope of this article, I have restricted the rest of the discussion to a Java / J2EE based technology project in an enterprise scenario. The basic definition of quality, and the ways to ensure it, should be similar in projects using other technology stacks and operating outside the corporate world, e.g. in the open-source arena.

Who should care about code quality?

Let's start with a quick questionnaire:  

  1. Do you deliver and / or review code written in Java?
  2. Do you manage / update / configure any 3rd-party product written in Java?
  3. Do you contribute code to any Java project that contains legacy code?
  4. Do you contribute code to any Java project that has a sizeable number of classes (say, more than 100), and do you want a grasp of the interdependence of those classes?
  5. Are you interested in assessing whether there are structural issues in a given Java project?

If the answer is yes to any / many of these questions, you should care about code quality.

You might not have realized it yet, and code quality (measuring it, ensuring it, delivering it) might not show up as a distinct item in your roles and responsibilities, but it is only a matter of time before it catches up with you and causes grief if left unaddressed. It is a much better approach to handle this monster proactively.

What is high-quality code anyway?

If you google this or discuss it with colleagues, you generally get two types of answers.

The first type is the generic *ity stuff (flexibility, reusability, portability, maintainability, reliability, testability, etc.). While these qualities are important, it is not always clear how exactly to measure them, let alone how exactly to improve them.

The second type is highly specific technical parameters (cyclomatic complexity, afferent coupling, efferent coupling, etc.). There are well-documented mathematical formulae to calculate these parameters, software that will calculate them for you, and it is relatively easy to arrive at a concrete action that will improve the numbers. However, translating an improvement in the numbers into an improvement in code quality remains a specialized skill.
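To make the second type a little more concrete, here is a small, hypothetical Java method (my own illustration, not taken from any real project) with its cyclomatic complexity counted by hand. Roughly speaking, the metric is one plus the number of decision points (if, for, while, case, &&, ||) in a method; tools such as PMD or Sonar compute the same number for you automatically.

    public class MetricsExample {

        // Decision points: the null check (1), the for loop (2), the inner if (3).
        // Cyclomatic complexity = 1 + 3 = 4.
        public int countNegatives(int[] values) {
            int count = 0;
            if (values == null) {
                return 0;
            }
            for (int value : values) {
                if (value < 0) {
                    count++;
                }
            }
            return count;
        }
    }

The point of the example is not the number itself but the fact that the number is mechanically computable, which is exactly why turning it into a judgement about quality still needs a human.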

So, net net, there is no easy answer. Let's try to change that. Let's pose a series of questions that - from a place of common sense - anyone on a team that writes / maintains a high-quality code base should be able to answer in the affirmative.


Question 1: Are you confident that as you add new code, none of the existing, working functionality will break?

Do you / your team check in code? I think it is safe to assume the answer is yes. Does an average developer on your team check in code more than once a day? Again, let's assume yes. Is it possible for an average developer on your team, on an average day, to know off the top of his head what code the other developers have checked in and how those code snippets are supposed to work? No. Even if you have all Newtons and Einsteins on your team, it is an emphatic "NO". So, how do you ensure that as the coders are frantically churning out code, they are not actually breaking more than they are creating?

The answer should be unit testing. Cover as much code as you can with unit tests. (If your answer is something else, could you leave a comment on this article, please? I would love to hear your suggestion.)
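As a minimal sketch of what such a unit test looks like (the InterestCalculator class and its simpleInterest method are hypothetical names of my own; only the JUnit 4 usage is standard):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class InterestCalculatorTest {

        // Pins down existing behaviour: if someone changes the interest
        // calculation while adding new code, this test fails and shows up
        // in the next morning's report.
        @Test
        public void simpleInterestForOneYear() {
            InterestCalculator calculator = new InterestCalculator();
            // 1000 at 5% for 1 year should yield 50
            assertEquals(50.0, calculator.simpleInterest(1000.0, 0.05, 1), 0.0001);
        }
    }

Wired into the toolset listed below, a test like this runs on every build, and its result and its contribution to the coverage number land on the dashboard without anyone having to remember to run it.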

Have an automated way of reporting the status of all unit tests to everyone on the team every morning. If unit tests are broken, fixing them gets the highest priority for the day.

Also have an automated report sent to everyone on the team every morning with the code coverage percentage. Ideally the coverage percentage should increase with every report. At the very least it should remain the same. If it goes down in any report, halt everything and investigate.

My common sense says that this has to be the most important code quality measure and process. (Again, if you have a different opinion, please leave a comment.) Fortunately, sorting out this bit is comparatively easy. Just use these toolsets:

  1. Unit testing framework: JUnit, TestNG
  2. Unit test coverage tool: EclEmma, Cobertura 
  3. A build tool: Maven, Ant 
  4. A continuous integration tool: Jenkins, TeamCity
  5. A web dashboard for the report: Sonar 
I am not saying this is the single / best answer. All I am saying is that if you don't have a better answer, this one is easy, free, and it works.

One note of caution. Many times, when teams start with this, someone googles around and finds out that good-quality products are supposed to have 80% unit test coverage, and in comparison their own product turns out to be in a much worse state. This has many implications, including morale and political issues. It is important to emphasize that 80% code coverage in isolation does not guarantee anything. What is really important is to get a working process in place and to continuously improve the test coverage.

 

Question 2: As you add new code, are you sure you are not committing the same silly mistakes that coders generally make? E.g. did you free up all resources in the finally block?

Anyone who codes makes mistakes. You are lucky if the compiler catches them for you, or if the runtime at least spits out a stack trace. But what about the mistakes that the compiler does not catch, yet the coding community knows from experience to be bad code? If you happened to work on banking software a decade ago, the only way to catch these silly mistakes was to have someone senior on the team review your code. Things have not changed much: you should still have an extra pair of eyes look at your code and design (a short example of one such mistake follows the toolset below). But luckily there is some tooling help as well. You could use this toolset:

  1. Any source code analyzer: PMD, Checkstyle, FindBugs, Crap4j
  2. A build tool: Maven, Ant 
  3. A continuous integration tool: Jenkins, TeamCity
  4. A web dashboard for the report: Sonar 
Again, I am not saying this is the single / best answer. All I am saying is that if you don't have a better answer, this one is easy, free, and it works.
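To illustrate the kind of "silly mistake" Question 2 refers to, here is a sketch of my own (the RateFileReader class is a hypothetical name): a resource that is not released if an exception is thrown, followed by the version an analyzer would be happy with.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class RateFileReader {

        // Risky: if readLine() throws, close() is never reached and the file
        // handle leaks. This is a typical out-of-the-box FindBugs / PMD finding.
        public String firstLineLeaky(String path) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader(path));
            String line = reader.readLine();
            reader.close();
            return line;
        }

        // Safer: the resource is released in a finally block no matter what
        // happens while reading.
        public String firstLineSafe(String path) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader(path));
            try {
                return reader.readLine();
            } finally {
                reader.close();
            }
        }
    }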

One note of caution. Most projects that start with these tools are inundated with hundreds (if not thousands) of items flagged by the source code analyzers. It is very important to spend some time up front with these tools and throttle back the reporting. Fortunately, it is very easy to add / remove rules in these analyzers, effectively configuring them to report only what you / your team think is worth flagging. The trick is to ensure that the rules are relevant to your team and that the reports are treated with the utmost respect. It is no good if the tools keep reporting a bunch of issues while nobody on the team is convinced they are relevant, or nobody is sure who is expected to fix them.
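As a small, hedged example of keeping the reports relevant once the team has agreed on the rules: PMD, for instance, honors Java's standard @SuppressWarnings annotation with a rule name, so an agreed exception can be recorded right next to the code instead of lingering in every report. (The class below is my own illustration; check your analyzer's documentation for the exact rule names and suppression mechanism it supports.)

    public class BatchJobLauncher {

        // The team has agreed that a command-line launcher may write to
        // System.out, so this particular PMD finding is suppressed here,
        // with the decision visible in the code itself.
        @SuppressWarnings("PMD.SystemPrintln")
        public static void main(String[] args) {
            System.out.println("Starting the nightly batch job...");
        }
    }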

I will draw Part 1 of this article to a close here. The first couple of questions discussed above are, I believe, the most important, and they should be taken up first by any technology project that sees value in having a handle on the quality of its code. The next part will touch on more advanced topics like structural analysis, mutation testing, etc.

Until then, Happy Coding!

 

Published at DZone with permission of Partha Bhattacharjee, author and DZone MVB. (source)


Comments

Alexander Rubinov replied on Sun, 2013/02/03 - 5:11pm

Hello Partha,

I fully agree with your opinion. Code quality should be supervised automatically, and violations should lead to a build failure.

There is a new open source quality tool that might interest you, called CODERU (http://coderu.org , developed by me to support my current project), which takes quite a different approach from FindBugs or PMD.

While FindBugs and PMD focus on the method and algorithm level, CODERU addresses structural quality at the package level, and therefore at the level of class dependencies.

CODERU forces you and your team members to write layered, component-oriented code by following predefined coding rules.

The rules are simple, but they prevent complex design problems from arising.

The CODERU rules rely on reserved package names and the allowed dependencies between them, expressed in a general way.

Unlike other tools that force you to define allowed or disallowed individual package dependencies, CODERU is based on a fixed set of general rules. The dependencies between packages need not be defined explicitly.

For more information, visit the tool's home page.

Ciao, Alexander
