Regression testing is an expensive testing process used to validate modified software. Regression test selection and test case prioritization can reduce the costs of regression testing by selecting a subset of test cases for execution, or by scheduling test cases to better meet testing objectives. The cost-effectiveness of these techniques can vary widely, however, and one cause of this variance is the type and magnitude of the changes made in producing a new software version. Engineers unaware of the causes and effects of this variance can make poor choices: designing ineffective change integration processes, selecting inappropriate regression testing techniques, building excessively expensive regression test suites, and making unnecessarily costly changes. Engineers aware of these causal factors can perform regression testing more cost-effectively. This paper reports the results of an embedded, multiple case study investigating the modifications made in the evolution of four software systems and their impact on regression testing techniques. The results of this study expose tradeoffs and constraints that affect the success of these techniques, and provide guidelines for designing and managing regression testing processes.
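To make the prioritization idea concrete, the following sketch shows one common family of techniques: greedy, coverage-based test case prioritization. This is an illustrative example only, not a technique from the study; the function name, test names, and the representation of coverage as sets of covered code units are all assumptions made for this sketch.

```python
def prioritize_by_additional_coverage(coverage):
    """Greedy 'additional coverage' prioritization: repeatedly schedule
    the test case that covers the most not-yet-covered code units.

    `coverage` maps each test name to the set of code units it covers
    (a simplifying assumption for this sketch)."""
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        # Pick the test contributing the most new coverage;
        # ties are broken deterministically by test name.
        best = max(remaining, key=lambda t: (len(remaining[t] - covered), t))
        order.append(best)
        covered |= remaining.pop(best)
    return order

# Hypothetical suite: tests mapped to the statements they cover.
suite = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {5},
    "t4": {1, 2, 3, 4},
}
print(prioritize_by_additional_coverage(suite))
```

Running tests in this order surfaces coverage (and, one hopes, faults) earlier in the run; a selection technique would instead truncate the suite, e.g. keeping only tests that exercise changed code.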