Understanding the Effects of Changes on the Cost-Effectiveness of Regression Testing Techniques
S. Elbaum, P. Kallakuri, A. G. Malishevsky, G. Rothermel, and S. Kanduri
Journal of Software Testing, Verification, and Reliability
Vol. 13, No. 2, June 2003, pages 65-83

Abstract

Regression testing is an expensive testing process used to validate modified software. Regression test selection and test case prioritization can reduce the costs of regression testing by selecting a subset of test cases for execution, or by scheduling test cases to better meet testing objectives. The cost-effectiveness of these techniques can vary widely, however, and one cause of this variance is the type and magnitude of the changes made in producing a new software version. Engineers unaware of the causes and effects of this variance may design poor change integration processes, select inappropriate regression testing techniques, build excessively expensive regression test suites, and make unnecessarily costly changes. Engineers aware of these causal factors can perform regression testing more cost-effectively. This article reports the results of an embedded, multiple case study investigating the modifications made in the evolution of four software systems and their impact on regression testing techniques. The results of this study expose tradeoffs and constraints that affect the success of these techniques, and provide guidelines for designing and managing regression testing.
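
As an illustration of the kind of technique the abstract refers to, the following is a minimal sketch (not taken from the paper) of one widely used test case prioritization strategy: greedily scheduling the test that covers the most not-yet-covered statements first. The function name and the coverage data are hypothetical, for illustration only; the paper itself studies several selection and prioritization techniques empirically rather than prescribing this one.

# Sketch of "additional" greedy coverage-based prioritization.
# coverage: dict mapping test name -> set of statement ids it covers.
def prioritize_by_additional_coverage(coverage):
    remaining = dict(coverage)                      # tests not yet scheduled
    uncovered = set().union(*coverage.values())     # statements still uncovered
    order = []
    while remaining:
        # Pick the test that adds the most new coverage (first on ties).
        best = max(remaining, key=lambda t: len(remaining[t] & uncovered))
        order.append(best)
        uncovered -= remaining.pop(best)
        # Once everything is covered, reset so leftover tests are still ordered.
        if not uncovered and remaining:
            uncovered = set().union(*remaining.values())
    return order

if __name__ == "__main__":
    # Hypothetical coverage data, not drawn from the study's subject programs.
    cov = {
        "t1": {1, 2, 3},
        "t2": {3, 4},
        "t3": {4, 5, 6, 7},
        "t4": {1, 7},
    }
    print(prioritize_by_additional_coverage(cov))   # e.g. ['t3', 't1', 't2', 't4']

A regression test selection technique, by contrast, would return only a subset of the tests (for example, those whose coverage intersects the code affected by a change) rather than a full ordering; the study examines how the type and magnitude of changes affect the cost-effectiveness of both kinds of techniques.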