[UPDATE] The original survey can be found at this link on Electric Cloud’s site.

Electric Cloud, a leading provider of software production management solutions, completed a survey of software development professionals (developers, testers and managers). One of the headline findings was that “the majority of software bugs are attributed to poor testing procedures or infrastructure limitations, not design problems.” Obviously, I was going to keep reading. Granted, the quote is a little overstated, but there were some very interesting points in the survey results.

So, what did they find? First, “Fifty-eight percent of respondents pointed to problems in the testing process or infrastructure as the cause of their last major bug found in delivered or deployed software.” Now, let me translate that. A major defect is typically a problem in software that causes the application to crash, save data in an inappropriate state or display information in an inappropriate manner. As a general rule, if a major defect is found after the application is deployed into production, it is a failing of the testing process. When I say testing process, I am spreading blame around: developers did not have appropriate unit tests, QA did not have appropriate test plans and the management team likely did not allocate the appropriate people or hardware to the project. Essentially, major defects are the fault of the entire team and should not be occurring.
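To make “appropriate unit tests” concrete, here is a minimal sketch in Python with pytest. The `save_record` function and its validation rule are hypothetical, but the shape is what matters: one test for the happy path, one that guards against the “save data in an inappropriate state” class of defect.

```python
import pytest


def save_record(record):
    # Hypothetical persistence function: it rejects incomplete records
    # instead of silently saving them in a bad state.
    if "id" not in record or "payload" not in record:
        raise ValueError("record is missing required fields")
    return {"status": "saved", "id": record["id"]}


def test_save_record_accepts_complete_record():
    assert save_record({"id": 1, "payload": "x"})["status"] == "saved"


def test_save_record_rejects_incomplete_record():
    # Guards against persisting data in an inappropriate state.
    with pytest.raises(ValueError):
        save_record({"id": 1})
```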

There were two pieces of very bad news for the software industry. First, only 12 percent of software development organizations are using fully automated test systems. This is a damn shame, but there are some good reasons for it. Automating user interface (UI) testing is extremely hard, time consuming and fragile. So, it does not surprise me that fully automated testing is rare. Thankfully, less than 10 percent are doing only manual testing with no automation. The other major issue was that 53 percent of respondents said their testing is limited by compute resources. Given how cheap hardware is these days, this should not be an issue, yet hardware limitations continue to plague the industry. I know people want to buy a decent server for testing, but sometimes a simple desktop machine that costs less than $500 will suffice, especially if it is a UI automation machine. I understand we all need to control costs, but the money spent on hardware is nothing compared to the man hours needed for manual testing.
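To show why UI automation is so fragile, here is a minimal sketch of a browser test using Selenium, one of the free tools available for this. The URL and element IDs are hypothetical; rename a single element on the page and the test breaks even though the application still works, which is exactly why fully automated UI suites are rare and costly to maintain.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Minimal UI automation sketch (Selenium 4 style).
# The URL and element IDs below are hypothetical.
driver = webdriver.Firefox()
try:
    driver.get("http://test.example.com/login")
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()
    # Brittle assertion: any change to the page title or the element IDs
    # above breaks this test, even when the feature itself is fine.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```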

People not in software development may wonder why all of this talk about defects and automated testing matters. Well, the survey had a really good point about this: 56 percent of respondents estimated that their last significant software bug resulted in an average of $250,000 in lost revenue and 20 developer hours to correct. Lost revenue is always a big deal, but do not underestimate the cost of developer hours. Let’s assume that the cost of a developer is $800 per day, or $100 per hour on average. 20 developer hours equates to $2,000, which does not seem like a big deal. However, there is also the testing time required, about 30% of development time, adding about $600. There is also management time, which for defect resolution is fairly high. Add in another 25% of the development time for each management person involved, typically a development manager, a QA manager and a VP/Director-level person. The management team costs more as well, about $125 per hour, which adds $625 per person, or $1,875 for the three. There are also the operational costs of deploying the corrected application into a QA environment and the production environment. This is about 3 hours at the same rate as a developer, so we add another $300. This brings our total cost, $2,000 + $600 + $1,875 + $300, to $4,775, plus all of the stress that goes with the fire drill. There is also the potential opportunity cost of delaying other projects because of the people required to fix this defect, but opportunity cost is hard to quantify without a specific context. All of this may not seem like a lot of money, but for a smaller departmental budget it could be very important. Also, compare the $4,775 to the cost of the desktop PC that could have been used for automated tests to find that defect. This one defect could have cost nearly 10X the cost of the hardware.
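For anyone who wants to check the math or plug in their own rates, here is the same back-of-the-envelope calculation as a short Python sketch. All of the rates and percentages are the assumptions stated above, not survey data.

```python
# Back-of-the-envelope defect cost, using the assumptions from the text.
DEV_RATE = 100   # dollars per developer hour ($800/day)
MGR_RATE = 125   # dollars per manager hour
DEV_HOURS = 20   # hours to fix the defect (from the survey)

dev_cost = DEV_HOURS * DEV_RATE                # $2,000
test_cost = 0.30 * DEV_HOURS * DEV_RATE        # 30% of dev time: $600
mgr_cost = 3 * (0.25 * DEV_HOURS) * MGR_RATE   # 3 managers, 25% of dev time each: $1,875
ops_cost = 3 * DEV_RATE                        # ~3 hours of deployments: $300

total = dev_cost + test_cost + mgr_cost + ops_cost
print(f"Total defect cost: ${total:,.0f}")     # Total defect cost: $4,775
```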

A lot of people are probably agreeing with me on the costs, but disagreeing with my simple “just add testing” idea. Typically, the objection is that automated testing is not easy. Good automated testing is hard. Automated web testing is even harder. However, automated testing is not expensive. There are a bunch of free tools available that many of your developers are probably familiar with. Some of the tools are useful for unit testing, others help with database testing and still others help with web testing. Another issue that keeps getting raised is that it is too hard to get 100% test coverage. I am not the first person to say this, but do not even try to get 100% test coverage. Developers do need a significant level of unit test coverage, probably above 90%, but acceptance tests and integration tests do not need that much coverage. One of the best parts of testing is that once a defect is found, you can write a test for it. That way, if all of your tests pass, you know the defect is fixed and it should not recur in the future.
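Here is what that defect-to-test loop looks like in practice, as a minimal regression-test sketch. The `parse_price` function and the bug it once had are hypothetical: imagine a production crash on input with a currency symbol.

```python
def parse_price(text):
    # Hypothetical function that once crashed on inputs like "$19.99".
    # The fix strips the symbol; the test below pins the fix in place.
    return float(text.strip().lstrip("$"))


def test_parse_price_regression_currency_symbol():
    # Regression test for the (hypothetical) production defect.
    # As long as this passes, that defect cannot silently return.
    assert parse_price("$19.99") == 19.99
```

The payoff is cumulative: every defect that escapes to production becomes a permanent test, so the suite gets better at catching exactly the kinds of mistakes your team actually makes.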

So, there is really nothing stopping you from putting a solid testing plan in place. If people and opinions get in the way, you can point to the information in the survey. If management says that writing automated tests will cost too much, ask them if it costs more than $250,000.
