Abstract

Context: Software source code is frequently changed to fix revealed bugs. These bug-fixing changes may introduce unintended system behaviors that are inconsistent with the scenarios of existing regression test cases, and consequently break regression testing. To validate the quality of changes, regression testing is a required step before submitting changes during software development. Our pilot study shows that 48.7% of bug-fixing changes may break regression testing on the first run, which means that for these changes developers have to run regression testing at least a couple of times. Such a process can be tedious and time consuming. Thus, identifying these changes and the corresponding regression test cases before running the regression test suite could help developers fix the changes quickly and improve the efficiency of regression testing. Goal: This paper proposes bug-fixing change impact prediction (BFCP), which predicts whether a bug-fixing change will break regression testing before the regression test cases are run, by mining software change histories. Method: Our approach employs machine learning algorithms and static call graph analysis. Given a bug-fixing change, BFCP first predicts whether it will break existing regression test cases; second, if the change is predicted to break regression test cases, BFCP further identifies the might-be-broken test cases. Results: Experiments on 552 real bug-fixing changes from four large open source projects show that BFCP achieves prediction precision of up to 83.3%, recall of up to 92.3%, and F-score of up to 81.4%. For identifying the might-be-broken test cases, BFCP achieves 100% recall.
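To make the two-stage pipeline in the Method concrete, the sketch below shows one way such an approach could be structured, assuming scikit-learn for the stage-1 classifier and networkx for the static call graph. The classifier choice (random forest), the feature set, and the graph-reachability test are illustrative assumptions on our part, not the paper's actual implementation.

```python
# Hypothetical sketch of a BFCP-style two-stage pipeline, under the
# assumptions stated above; not the paper's implementation.
import networkx as nx
from sklearn.ensemble import RandomForestClassifier


class BFCPSketch:
    def __init__(self, call_graph: nx.DiGraph):
        # Stage 1: a classifier trained on features mined from historical
        # bug-fixing changes (e.g., lines changed, files touched, author
        # history); the concrete feature set is an assumption.
        self.classifier = RandomForestClassifier()
        # Stage 2: a static call graph; an edge (a, b) means method a calls b.
        self.call_graph = call_graph

    def fit(self, change_features, broke_regression_labels):
        # Train on past changes labeled by whether they broke regression
        # testing on the first run (mined from change histories).
        self.classifier.fit(change_features, broke_regression_labels)

    def will_break_tests(self, change_features) -> bool:
        # Stage 1: predict whether a new bug-fixing change will break
        # existing regression test cases.
        return bool(self.classifier.predict([change_features])[0])

    def might_be_broken_tests(self, changed_methods, test_cases):
        # Stage 2: for a change predicted to break testing, flag every test
        # case whose call graph can reach a changed method.
        changed = set(changed_methods)
        affected = []
        for test in test_cases:
            if test not in self.call_graph:
                continue
            reachable = nx.descendants(self.call_graph, test) | {test}
            if reachable & changed:
                affected.append(test)
        return affected
```

Under these assumptions, the reachability test in stage 2 over-approximates the affected tests (any test that can transitively call a changed method is flagged), which is consistent with the 100% recall the abstract reports for identifying might-be-broken test cases.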
