Abstract

Code edits are very common during software development. Especially in agile development, these edits require constant validation to avoid functionality regressions. In this context, regression test suites are often used. However, regression testing can be very costly. Test case prioritization (TCP) techniques try to reduce this burden by reordering the tests of a given suite so that a certain testing goal is achieved sooner. The literature presents a great number of TCP techniques. Most work on prioritization evaluates the performance of TCP techniques by computing the average percentage of faults detected (the APFD metric). However, other aspects should be considered when evaluating prioritization results, for instance, the ability to reduce the spreading of failing test cases, since a better grouping of failures often provides more information regarding faults. This paper presents an empirical investigation that evaluates the performance of a set of prioritization techniques by comparing APFD and spreading results. Our results show that prioritization techniques generate different APFD and spreading results, with total statement prioritization presenting the lowest spreading.
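As a concrete reference for the APFD metric mentioned above, the sketch below computes it from a prioritized test ordering using the standard formula APFD = 1 - (TF_1 + ... + TF_m)/(n·m) + 1/(2n), where n is the number of tests, m the number of faults, and TF_i the position of the first test that reveals fault i. This is a minimal illustration of that formula, not the tooling used in the paper; the test and fault identifiers are hypothetical.

```python
from typing import Dict, List, Set

def apfd(test_order: List[str], faults_detected: Dict[str, Set[int]]) -> float:
    """Average Percentage of Faults Detected for a prioritized test order.

    test_order      -- test case ids in prioritized execution order
    faults_detected -- maps each test id to the set of fault ids it reveals
    """
    n = len(test_order)
    faults = set().union(*faults_detected.values())
    m = len(faults)
    # TF_i: 1-based position of the first test that detects fault i
    first_positions = []
    for fault in faults:
        pos = next(i + 1 for i, t in enumerate(test_order)
                   if fault in faults_detected.get(t, set()))
        first_positions.append(pos)
    return 1 - sum(first_positions) / (n * m) + 1 / (2 * n)


# Toy usage: 5 tests, 2 faults; an ordering that exposes faults earlier scores higher.
detects = {"t1": set(), "t2": {1}, "t3": set(), "t4": {2}, "t5": set()}
print(apfd(["t2", "t4", "t1", "t3", "t5"], detects))  # faults found early -> 0.8
print(apfd(["t1", "t3", "t5", "t2", "t4"], detects))  # faults found late  -> 0.2
```

Note that APFD only rewards detecting faults early; it says nothing about whether the failing tests end up clustered together, which is the complementary "spreading" aspect the paper investigates.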
