
Survey Says: Developers Think Testing Is Failing

06.30.2010

Electric Cloud, a leading provider of software production management solutions, completed a survey of software development professionals (developers, testers and managers). One of the major leads in the results was that “the majority of software bugs are attributed to poor testing procedures or infrastructure limitations, not design problems.” Obviously, I was going to keep reading. Granted, the quote was a little overstated, but there were some very interesting points in the survey results.

So, what did they find? First, “Fifty-eight percent of respondents pointed to problems in the testing process or infrastructure as the cause of their last major bug found in delivered or deployed software.” Now, let me translate that. A major defect is typically a problem in software that causes the application to crash, save data in an inconsistent state, or display information in an incorrect manner. As a general rule, if a major defect is found after the application is deployed into production, it is a failing of the testing process. When I say testing process, I am spreading blame around. Developers did not have appropriate unit tests, QA did not have the appropriate test plans, and the management team likely did not allocate the appropriate people or hardware to the project. Essentially, major defects are the fault of the entire team and should not be occurring.

There were two pieces of very bad news for the software industry. First, only 12 percent of software development organizations are using fully automated test systems. This is a damn shame, but there are also some good reasons for it. Automating user interface (UI) testing is extremely hard, time-consuming, and fragile. So, it does not surprise me that fully automated testing is rare. Thankfully, fewer than 10 percent are doing only manual testing with no automation at all. The other major issue was that 53 percent of respondents said their testing is limited by compute resources. Given how cheap hardware is these days, this should not be an issue. However, limitations of the hardware environment continue to plague the industry. I know people want to buy a decent server for testing, but sometimes a simple desktop machine that costs less than $500 will suffice, especially if it is a UI automation machine. I understand we all need to control costs, but the money spent on hardware is nothing compared to the man-hours needed for manual testing.
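To make that fragility concrete, here is a minimal sketch of an automated UI check using Selenium WebDriver, one of the free tools in this space. The URL, element ids, and credentials are all invented for illustration; nothing here comes from the survey or any real application.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

/**
 * A bare-bones automated UI check. Everything concrete here (URL,
 * element ids, credentials) is hypothetical.
 */
public class LoginSmokeTest {

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://staging.example.com/login"); // hypothetical app

            // Each locator below is coupled to the page's current structure.
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();

            // Rename one element id in the page and this check breaks --
            // exactly the fragility described above.
            if (driver.findElements(By.id("welcome-banner")).isEmpty()) {
                throw new AssertionError("Login did not reach the welcome page");
            }
            System.out.println("Login smoke test passed");
        } finally {
            driver.quit(); // always close the browser, pass or fail
        }
    }
}
```

Even this trivial test depends on three element ids and one URL staying stable, which is why fully automated UI suites are rare and expensive to maintain.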

People not in software development may wonder why all of this talk about defects and automated testing matters. Well, the survey had a really good point about this. Fifty-six percent of respondents estimated that their last significant software bug resulted in an average of $250,000 in lost revenue and 20 developer hours to correct. Lost revenue is always a big deal, but do not underestimate the cost of developer hours. Let’s assume that a developer costs $800 per day, or $100 per hour, on average. Twenty developer hours equates to $2,000, which does not seem like a big deal. However, there is also the testing time required, about 30% of the development time, adding another $600. There is also management time, which for defect resolution is fairly high. Add in another 25% of the development time (5 hours) for each management person involved, typically a development manager, a QA manager and a VP/Director-level person. The management team will typically cost more as well, about $125 per hour, which adds $625 per person, or $1,875. There are also the operational costs of deploying the corrected application into a QA environment and then the production environment. This is about 3 hours at the same rate as a developer, so we add another $300. That brings our total cost, $2,000 + $600 + $1,875 + $300, to $4,775, plus all of the stress that goes with the fire drill. There is also the potential opportunity cost of delaying other projects because of the people pulled in to fix this defect, but opportunity cost is hard to quantify without a specific context. All of this may not seem like a lot of money, but for a smaller departmental budget it could be very important. Also, compare that $4,775 to the cost of the desktop PC that could have run the automated tests that find the defect. The defect could cost nearly 10X the price of the hardware.
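For anyone who wants to play with those assumptions, here is the same arithmetic as a small runnable sketch. Every number is an estimate from the paragraph above, not measured data, so adjust the rates and hours for your own organization.

```java
/**
 * The defect-cost arithmetic from the paragraph above as runnable code.
 * All rates and percentages are the article's assumptions.
 */
public class DefectCostEstimate {

    public static void main(String[] args) {
        double devRate  = 100.0; // $ per developer hour ($800/day)
        double mgmtRate = 125.0; // $ per management hour
        double devHours = 20.0;  // developer hours to fix the defect

        double devCost    = devHours * devRate;                // $2,000
        double testCost   = 0.30 * devHours * devRate;         // 30% of dev time: $600
        double mgmtCost   = 3 * (0.25 * devHours) * mgmtRate;  // 3 managers, 5 hours each: $1,875
        double deployCost = 3 * devRate;                       // QA + production deploys: $300

        double total = devCost + testCost + mgmtCost + deployCost;
        System.out.printf("Total cost of the defect: $%,.0f%n", total); // $4,775
    }
}
```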

A lot of people are probably agreeing with me on the costs but disagreeing with my simple “just add testing” idea. Granted, basic automated testing is not easy, good automated testing is hard, and automated web testing is even harder. However, automated testing is not expensive. There are plenty of free tools available that many of your developers are probably already familiar with. Some of the tools are useful for unit testing, others help with testing against a database, and others help with web testing. Another issue that keeps getting raised is that it is too hard to get 100% test coverage. I am not the first person to say this, but do not even try to get 100% test coverage. Developers do need a significant level of unit test coverage, probably above 90%, but acceptance tests and integration tests do not need that much coverage. One of the best parts of testing is that once a defect is found, you can write a test for it. That way, if all of your tests pass, you know you have fixed the defect, and it should not recur in the future.
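As a sketch of that last point, here is what a regression test for a fixed defect might look like in JUnit. The OrderCalculator class and the bug it pins down are invented for illustration; the pattern of capturing a found defect as a permanent test is what matters.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountRegressionTest {

    /** Minimal stand-in for the class under test, invented for this example. */
    static class OrderCalculator {
        double applyDiscount(double total, double percent) {
            return total * (1.0 - percent / 100.0);
        }
    }

    // Hypothetical bug report: a 0% discount once zeroed out the order total.
    // This test pins the fix so the defect cannot silently come back.
    @Test
    public void zeroPercentDiscountKeepsFullTotal() {
        OrderCalculator calc = new OrderCalculator();
        assertEquals(49.99, calc.applyDiscount(49.99, 0.0), 0.0001);
    }
}
```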

So, there is really nothing stopping you from putting a solid testing plan in place. If people and opinions get in the way, point to some of the information in the survey. If management says that writing automated tests will cost too much, ask them if it costs more than $250,000.

From http://regulargeek.com/2010/06/03/survey-says-developers-think-testing-is-failing

Published at DZone with permission of Robert Diana, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Comments

Daniel Alexiuc replied on Wed, 2010/06/30 - 1:14am

This comment is not about acceptance/integration tests, but touches on a point you made about not even trying to achieve 100% unit testing coverage.

If you've already got 90% unit test coverage - then why not go for 100%?

- You get the benefit of seeing all portions of unused code instantly.
- It will guarantee that all thrown runtime exceptions are handled.
- It's not that hard to get from 90% to 100% (using a few exclude filters and the like), and it is very easy to stay there once you have arrived.
- It makes it very easy to see which parts of your code are not tested yet.

Rob Grzywinski replied on Wed, 2010/06/30 - 6:54am

Mr. Alexiuc, unfortunately 100% code coverage doesn't tell you anything about the quality of the code. The types of testing used (system, integration, positive, negative, etc.) combined with code coverage can be a much better indicator.

There's already a series of Javalobby discussions around this, for example:  http://java.dzone.com/articles/when-100-coverage-gives-us

Cloves Almeida replied on Thu, 2010/07/08 - 8:33pm

Some code (like UI building, or getters/setters) is too trivial or too hard to cover. That time is better spent on other important stuff, like proper refactoring.

Michael Eric replied on Wed, 2012/09/26 - 3:42pm

It seems to me that defects are most commonly caused by mistakes in programming, often caused by mistakes in communication between requirements givers and programmers, sometimes caused by mistaken requirements.

Defects are never caused by testers or testing. Sometimes mistakes by these other people could have been detected by better testing (by someone), but the cause of software defects may not appropriately be assigned to “poor testing” in my opinion.

