Code-coverage-based test data adequacy criteria typically treat all code components as equal. In practice, however, the probability that a test case can expose a fault in a code component varies: some faults are more easily revealed than others. Thus, researchers have suggested that if we could estimate the probability that a fault in a code component will cause a failure, we could use this estimate to determine the number of executions of a component that are required to achieve a certain level of confidence in that component's correctness. This estimate, in turn, could be used to improve the fault-detection effectiveness of test suites. Although this suggestion is intriguing, no empirical studies have directly examined it. We therefore conducted an experiment to investigate the effects of incorporating an estimate of fault exposure probability into the statement coverage test data adequacy criterion. The results highlight several cost-benefit tradeoffs with respect to the incorporation of the estimate.
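The calculation underlying the suggestion can be sketched as follows. If a fault in a component causes a failure on any given execution with probability p, then after n independent executions the probability of observing at least one failure is 1 - (1 - p)^n; solving for n gives the number of executions required for a chosen confidence level. The function below is a minimal illustration of that arithmetic, not the estimation procedure or instrumentation used in the experiment:

```python
import math

def required_executions(p, confidence):
    """Executions of a component needed so that a fault with
    per-execution exposure probability p triggers at least one
    failure with the given confidence.
    Solves 1 - (1 - p)**n >= confidence for the smallest integer n."""
    if not (0 < p < 1) or not (0 < confidence < 1):
        raise ValueError("p and confidence must lie in (0, 1)")
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# A hard-to-expose fault demands far more executions than an
# easy one for the same 95% confidence level.
print(required_executions(0.01, 0.95))  # 299
print(required_executions(0.5, 0.95))   # 5
```

The steep growth of n as p shrinks is what makes weighting coverage by fault exposure probability attractive in principle, and also what drives the cost side of the tradeoff the experiment examines.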