Abstract

Many software processes advocate that test code should co-evolve with production code. Prior work usually studies such co-evolution based on production-test co-evolution samples mined from software repositories. A production-test co-evolution sample refers to a pair consisting of a test code change and a production code change, where the test code change triggers or is triggered by the production code change. The quality of the mined samples is critical to the reliability of research conclusions. Existing studies mined production-test co-evolution samples based on the following assumption: if a test class and its associated production class change together in one commit, or a test class changes within a short time interval after the associated production class changes, the change pair is a production-test co-evolution sample. However, the validity of this assumption has never been investigated. To fill this gap, we present an empirical study that investigates the reasons why test code updates occur after the associated production code changes, and reveals that noise is pervasive in the production-test co-evolution samples that existing works identified based on the aforementioned assumption. We define a taxonomy of such noise comprising six categories (i.e., adaptive maintenance, perfective maintenance, corrective maintenance, indirectly related production code update, indirectly related test code update, and other reasons). Guided by the empirical findings, we propose CHOSEN (an identifiCation metHod Of production-teSt co-EvolutioN) based on a two-stage strategy. CHOSEN takes a test code change and its associated production code change as input and determines whether the production-test change pair is a production-test co-evolution sample. The identified samples serve as the basis of, or are useful for, various downstream tasks. We conduct a series of experiments to evaluate our method. Results show that (1) CHOSEN achieves an AUC of 0.931 and an F1-score of 0.928, significantly outperforming existing identification methods, and (2) CHOSEN can help researchers and practitioners draw more accurate conclusions in studies related to the co-evolution of production and test code. For the task of Just-In-Time (JIT) obsolete test code detection, which aims to detect whether a piece of test code should be updated when developers modify the production code, the test set constructed by CHOSEN helps measure a detection method’s performance more accurately, leading to an average error of only 0.76% compared with the ground truth. In addition, the dataset constructed by CHOSEN can be used to train a better obsolete test code detection model, with average improvements in accuracy, precision, recall, and F1-score of 12.00%, 17.35%, 8.75%, and 13.50%, respectively.
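To make the mining assumption concrete, the following Python sketch illustrates the co-change heuristic described above. The abstract does not specify an implementation, so everything here is an illustrative assumption: the `Commit` data model, the name-based pairing in `is_test_of`, and the 300-second window are hypothetical and do not correspond to CHOSEN or to the exact heuristic of any particular prior study.

```python
# A minimal sketch of the co-change heuristic that prior work used to mine
# production-test co-evolution samples, as summarized in the abstract.
# All names and the 300-second window are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Commit:
    timestamp: float          # commit time as a Unix epoch (seconds)
    changed_files: set[str]   # file paths touched by the commit


def is_test_of(test_path: str, prod_path: str) -> bool:
    """Naive name-based pairing: FooTest.java is assumed to test Foo.java."""
    test_name = test_path.rsplit("/", 1)[-1]
    prod_name = prod_path.rsplit("/", 1)[-1]
    return test_name == prod_name.replace(".java", "Test.java")


def mine_co_evolution_samples(commits: list[Commit],
                              window_seconds: float = 300.0) -> list[tuple[str, str]]:
    """Return (production file, test file) pairs labeled as co-evolution.

    Rule 1: the production class and its test class change in the same commit.
    Rule 2: the test class changes in a later commit within a short time
            window after the commit that changed the production class.
    """
    samples = []
    commits = sorted(commits, key=lambda c: c.timestamp)
    for i, commit in enumerate(commits):
        prods = [f for f in commit.changed_files if not f.endswith("Test.java")]
        for prod in prods:
            # Rule 1: same-commit co-change.
            for f in commit.changed_files:
                if is_test_of(f, prod):
                    samples.append((prod, f))
            # Rule 2: test change shortly after the production change.
            for later in commits[i + 1:]:
                if later.timestamp - commit.timestamp > window_seconds:
                    break  # commits are sorted, so no later one can qualify
                for f in later.changed_files:
                    if is_test_of(f, prod):
                        samples.append((prod, f))
    return samples
```

The abstract's central observation is that both rules admit noise: a test file changed in the same commit (or shortly after) may reflect, for example, perfective or corrective maintenance rather than genuine co-evolution, which is why a dedicated identification method such as CHOSEN is needed.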
