Pros and Cons of Automated Software Testing

PROS of Automated Software Testing

Automated software testing can:

  • Facilitate the testing process by letting the tool do most of the work (when appropriate)


  • Reduce or eliminate the possibility of human error or oversight that exists in manual testing


  • Provide a good return on investment (see below), i.e., money saved by finding problems early, by using fewer manual testers, and/or because the tool sped up the process in a “rush-to-market” environment


  • Come with product support from the automated software tool vendor
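
To make the first point concrete, here is a minimal sketch of what "letting the tool do the work" can look like: plain test functions that a runner such as pytest discovers and executes automatically. The `add` function is a hypothetical example, not something from this article.

```python
# Hypothetical function under test (illustration only)
def add(a, b):
    return a + b

# A test runner such as pytest discovers functions named test_* and
# runs them automatically -- no manual checking required.
def test_add_positive():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, 1) == 0
```

Once written, these checks run unattended on every build, which is where the reduction in human oversight comes from.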

CONS of Automated Software Testing

  • Selecting the most *appropriate* automated software tool for your particular needs can be challenging. Given the high price of certain tools on the market, selecting “just the right” tool needs to be a highly informed decision. It should not be based on marketing hype or the slick ads of a tool vendor. Return on Investment (ROI) is also a risk. Theoretically, ROI = (money saved by the automated software tool) / (cost of the tool) × 100. For a good ROI, this should be around 200% (see E. Hendrickson’s article at http://www.stickyminds.com/).


  • Choosing what/when to automate can also be tricky: not every software test case can be automated, so deciding where the line is drawn requires skill and plenty of professional experience. For example, software verification testing is often a manual (human) effort, whereas validation is mostly automated. But, in reality, I don’t think it’s so cut and dried.


  • Maintaining automated software tools as your product evolves takes ongoing effort. This may partially be the responsibility of the tool vendor; however, no one is more intimately aware of the product than the developer/tester’s own organization. Tool maintenance and upgrades are critical if your product is constantly evolving.


  • Ensuring the reliability of test tools: putting your trust in a test tool is NOT a passive exercise (i.e., sitting back and letting the tool run the show). The trustworthiness of a tool is only established after it has faithfully performed for an organization on a daily basis. Unfortunately, this can be the school of hard knocks.


  • Unattended testing (such as in an automated software test environment) can lead to cascading failures: untraceable errors that leave the tested product in an unexpected state. An “error recovery” system may be the answer to this problem; it involves establishing a “baseline state” of the product being tested, to which testing can return after a failure. These added procedures do add to the complexity of automated software testing, but tool vendors may have all-in-one solutions that provide this ability.


  • Overreliance on automated software testing does not build the skills of individual testers. What will happen when the tool fails? Will the testers know how to work around the problem? The organization developing a product needs to weigh all of these variables (based on ROI and other factors) before investing in a particular automated software tool.


  • In summary, testing should be both an automated and a manual effort. The degree to which either practice is implemented depends on the requirements of the product and the resources the developer has available. While it’s true that automated software test tools have grown fairly sophisticated, so have the programs being tested. Automated software tools should never replace good, rational human intuition or judgment – they should complement it – at least in the present state of technology! Automated software tools should also not impose a particular test methodology; instead, they need to build on a company’s existing test methodology.
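
The ROI rule of thumb quoted earlier can be sketched in a few lines. The dollar figures below are assumptions for illustration, not data from this article:

```python
def tool_roi(money_saved, tool_cost):
    """ROI (%) = (money saved by the tool) / (cost of the tool) * 100."""
    return money_saved / tool_cost * 100

# Assumed figures: the tool saves $30,000 and costs $15,000.
roi = tool_roi(30_000, 15_000)
print(f"ROI: {roi:.0f}%")  # 200% -- meets the ~200% rule of thumb
```

In practice, estimating "money saved" is the hard part; it has to account for defects caught early, tester hours freed up, and schedule time recovered.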
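
One way to implement the “baseline state” / “error recovery” idea mentioned above is to snapshot a known-good state and restore it around every test. This is a simplified sketch using a directory copy; the directory names and the `run_test` harness are hypothetical, not from any particular tool:

```python
import os
import shutil

BASELINE_DIR = "baseline_state"    # hypothetical known-good snapshot
WORK_DIR = "product_under_test"    # hypothetical working copy the tests mutate

def restore_baseline():
    """Reset the product under test to its baseline state."""
    if os.path.exists(WORK_DIR):
        shutil.rmtree(WORK_DIR)
    shutil.copytree(BASELINE_DIR, WORK_DIR)

def run_test(test_fn):
    """Run one test from a clean baseline; on any error, restore the
    baseline so a failure cannot cascade into later tests."""
    restore_baseline()
    try:
        test_fn()
        return "pass"
    except Exception:
        restore_baseline()   # the error-recovery step
        return "fail"
```

Restoring state before and after each test costs time, which is exactly the added complexity the bullet above warns about.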
