Abstract

Regression testing is widely used to check that changes made to software do not break existing functionality, but regression test suites grow over time, and running them in full can become costly. Researchers have proposed test-suite reduction and regression test selection as two approaches to reduce this cost by not running some of the tests from the test suite. However, previous research has not empirically evaluated how the two approaches compare to each other, or how well a combination of the two performs. We present the first extensive study that compares test-suite reduction and regression test selection individually, and also evaluates a combination of the two approaches. We also propose a new criterion to measure the quality of tests with respect to software changes. Our experiments on 4,793 commits from 17 open-source projects show that regression test selection runs, on average, fewer tests (by 40.15 percentage points) than test-suite reduction. However, test-suite reduction can have a high loss in fault-detection capability with respect to the changes, whereas a (safe) regression test selection has no loss. The experiments also show that a combination of the two approaches runs even fewer tests (by 5.34 percentage points on average) than regression test selection, but these tests still have a loss in fault-detection capability with respect to the changes.
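To make the two cost-reduction strategies concrete, the following minimal Python sketch illustrates one common instantiation of each: a greedy, coverage-preserving test-suite reduction and a coverage-based regression test selection. It is only an illustration, not the tooling or algorithms used in the study; the input names (`coverage`, a map from each test to the set of code elements it covers, and `changed`, the set of elements touched by a commit) are assumptions introduced here for the example.

```python
# Illustrative sketch only; not the paper's implementation.
# Assumed (hypothetical) inputs:
#   coverage: dict mapping test name -> set of covered code elements
#   changed:  set of code elements modified by the current commit

def greedy_reduction(coverage):
    """Test-suite reduction: keep a small subset of tests that preserves the
    full suite's coverage, using the classic greedy set-cover heuristic."""
    uncovered = set().union(*coverage.values()) if coverage else set()
    reduced = []
    while uncovered:
        # Pick the test that covers the most still-uncovered elements.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break
        reduced.append(best)
        uncovered -= coverage[best]
    return reduced

def select_for_change(coverage, changed):
    """Regression test selection: run only the tests whose coverage overlaps
    the changed code; a safe selection never skips an affected test."""
    return [t for t, cov in coverage.items() if cov & changed]

if __name__ == "__main__":
    coverage = {
        "t1": {"a", "b"},
        "t2": {"b", "c"},
        "t3": {"c"},
    }
    changed = {"c"}
    print(greedy_reduction(coverage))     # reduced suite, e.g. ['t1', 't2']
    print(select_for_change(coverage, changed))  # affected tests: ['t2', 't3']
```

Note the trade-off the study measures: the reduced suite above drops `t3` permanently (a potential loss in fault detection for changes to `c`), while the change-based selection keeps every test that reaches the modified code.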

