On the Use of Mutation Faults in Empirical Assessments of Test Case Prioritization Techniques,
H. Do and G. Rothermel,
IEEE Transactions on Software Engineering,
Vol. 32, No. 9, 2006, pp. 733-752.

Abstract

Regression testing is an important activity in the software lifecycle, but it can also be very expensive. To reduce the cost of regression testing, software testers may prioritize their test cases so that those that are more important, by some measure, are run earlier in the regression testing process. One potential goal of test case prioritization is to increase a test suite's rate of fault detection: how quickly the test suite detects faults as its test cases are run. Previous work has shown that prioritization can improve a test suite's rate of fault detection, but the assessment of prioritization techniques has been limited primarily to hand-seeded faults, largely due to the belief that such faults are more realistic than automatically generated (mutation) faults. A recent empirical study, however, suggests that mutation faults can be representative of real faults, and that the use of hand-seeded faults can be problematic for the validity of empirical results focusing on fault detection. We have therefore designed and performed two controlled experiments assessing the ability of test case prioritization techniques to improve the rate of fault detection of test suites, measured relative to mutation faults. Our results show that prioritization can be effective relative to the faults considered, and they expose ways in which that effectiveness can vary with characteristics of faults and test suites. More important, comparing our results to those obtained with hand-seeded faults reveals several implications for researchers performing empirical studies of test case prioritization techniques, and of testing techniques in general.
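
Two notions in the abstract benefit from concrete illustration. First, a mutation fault is a small, mechanically seeded change to the program under test, produced by applying a mutation operator. The fragment below is a minimal sketch of one such operator (relational operator replacement); the function and values are illustrative and not drawn from the paper's subject programs.

    def is_adult(age):               # original program
        return age >= 18

    def is_adult_mutant(age):        # mutant: ">=" replaced by ">"
        return age > 18

A test case detects ("kills") this mutant if its outcome differs between the two versions; here, any test that exercises age == 18 does, while tests using only, say, age == 17 or age == 25 do not.

Second, the standard measure of rate of fault detection in this line of work is APFD (Average Percentage of Faults Detected): APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where n is the number of tests, m the number of faults, and TF_i the position of the first test in the prioritized order that reveals fault i. The following is a minimal sketch of that metric, assuming each fault is detected by at least one test in the suite; the identifier names are ours.

    def apfd(order, detects):
        # order:   list of test ids, in prioritized execution order
        # detects: dict mapping each fault id to the set of test ids
        #          that reveal it (each fault revealed by >= 1 test)
        n, m = len(order), len(detects)
        position = {test: i + 1 for i, test in enumerate(order)}  # 1-based
        # TF_i: position of the first test that detects fault i
        tf = [min(position[t] for t in tests) for tests in detects.values()]
        return 1.0 - sum(tf) / (n * m) + 1.0 / (2 * n)

For example, with five tests where faults f1 and f2 are first revealed by the tests in positions 3 and 1, APFD = 1 - (3 + 1)/(5*2) + 1/(2*5) = 0.7; reordering the suite so that the test revealing f1 runs first raises APFD to 0.8.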