Abstract

As software evolves, its test suite continually grows larger. However, running all the test cases in the suite is prohibitively expensive in most cases. To reduce the cost of regression testing, we can optimize the test case execution schedule to maximize the early fault detection rate of the original test suite. Unlike previous research, we use classification algorithms to guide the scheduling process based on code change information and analysis of past running results. In particular, we first train a classifier for each test case using both the code change information and the running results from previous versions. We then use the trained classifier to estimate the fault detection probability of the test case in a new version. Finally, we generate a test case execution schedule based on the fault detection probabilities of all the test cases. To verify the effectiveness of our approach, we performed an empirical study on the Siemens Suite, which includes 7 real programs written in the C programming language, and chose several typical classification algorithms, such as the decision tree classifier, Bayes classifier, and nearest neighbor classifier. The results show that in most cases our approach outperforms a random approach, and we further provide a guideline for achieving a cost-effective test case execution schedule when using our approach.
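
The following is a minimal sketch of the classifier-guided scheduling idea described above, not the paper's actual implementation. The feature set, the `history` layout, and the use of scikit-learn's `DecisionTreeClassifier` are illustrative assumptions: we assume each test case has, for every past version, a feature vector built from code change information, labelled with whether the test detected a fault in that version.

```python
# Sketch: per-test-case classifiers used to rank tests by estimated
# fault detection probability. Feature names and data are hypothetical.
from sklearn.tree import DecisionTreeClassifier

def train_per_test_classifiers(history):
    """history: {test_id: (X, y)} where X holds one feature vector per past
    version and y marks whether the test detected a fault in that version."""
    classifiers = {}
    for test_id, (X, y) in history.items():
        clf = DecisionTreeClassifier()
        clf.fit(X, y)
        classifiers[test_id] = clf
    return classifiers

def schedule(classifiers, new_version_features):
    """Rank test cases by the estimated probability of detecting a fault
    in the new version, highest first."""
    scores = {}
    for test_id, clf in classifiers.items():
        x = [new_version_features[test_id]]
        if 1 in clf.classes_:  # probability of the fault-detecting class
            idx = list(clf.classes_).index(1)
            scores[test_id] = clf.predict_proba(x)[0][idx]
        else:                  # this test never detected a fault before
            scores[test_id] = 0.0
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical usage: two test cases, two past versions each.
# Features: [changed lines covered, changed functions covered].
history = {
    "t1": ([[5, 1], [0, 0]], [1, 0]),
    "t2": ([[2, 1], [3, 2]], [0, 1]),
}
classifiers = train_per_test_classifiers(history)
order = schedule(classifiers, {"t1": [4, 1], "t2": [0, 0]})
print(order)  # execution schedule, most likely fault-detectors first
```

Swapping `DecisionTreeClassifier` for a Bayes or nearest neighbor classifier, as in the empirical study, only changes the estimator used in `train_per_test_classifiers`.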
