Abstract

Delta debugging has been proposed to isolate failure-inducing changes when regressions occur. In this work, we focus on evaluating delta debugging in practical settings from developers' perspectives. Our evaluation uses a collection of real regressions taken from medium-sized open source programs. Towards automated debugging in software evolution, we built a tool based on delta debugging and discuss both its limitations and costs. We evaluated two variants of delta debugging. In contrast to the successful isolations reported in Zeller's initial studies, the results in our experiments vary widely. Two thirds of the isolated changes in the studied programs provide direct or indirect clues for locating regression bugs. The remaining results are superfluous changes or even wrong isolations. In the case of wrong isolations, the isolated changes reproduce the same failing behaviour as the regression but are failure-irrelevant. Moreover, the hierarchical variant does not yield definite improvements in either efficiency or accuracy.
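To make the isolation idea concrete, the following is a minimal sketch of ddmin-style delta debugging over a set of changes, in the spirit of Zeller's algorithm rather than the tool described in this work. The `test` callback, the integer change IDs, and the "FAIL"/"PASS" verdicts are assumptions for illustration; a real setup would apply each subset of changes to the code base and rerun the failing test.

```python
def ddmin(changes, test):
    """Return a reduced subset of `changes` for which `test` still fails.

    `test(subset)` must return "FAIL" when the regression shows up after
    applying exactly that subset of changes, and "PASS" otherwise.
    """
    assert test(changes) == "FAIL"
    n = 2  # start by splitting the change set in half
    while len(changes) >= 2:
        chunk = max(len(changes) // n, 1)
        subsets = [changes[i:i + chunk] for i in range(0, len(changes), chunk)]
        reduced = False
        for i, subset in enumerate(subsets):
            complement = [c for j, s in enumerate(subsets) if j != i for c in s]
            if test(subset) == "FAIL":      # one subset alone reproduces the failure
                changes, n, reduced = subset, 2, True
                break
            if test(complement) == "FAIL":  # removing one subset keeps the failure
                changes, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(changes):
                break                       # already at single-change granularity
            n = min(n * 2, len(changes))    # otherwise, split into smaller subsets
    return changes


# Hypothetical usage: changes 3 and 7 together induce the failure.
if __name__ == "__main__":
    failing = {3, 7}
    verdict = lambda subset: "FAIL" if failing <= set(subset) else "PASS"
    print(ddmin(list(range(10)), verdict))  # -> [3, 7]
```

Each test run applies a candidate subset of changes; the loop narrows the set until no smaller subset or complement still reproduces the failure, which is the kind of isolated change set our evaluation examines for usefulness.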
