Abstract

Automated test generation can reduce the manual effort of improving software quality. A test generation method employs a code-coverage criterion, such as the widely used branch coverage, to guide the inference of tests, which can then be used to detect hidden faults. An automated tool takes a specific type of code coverage as a configurable parameter. Given an automated test generation tool, a fault may be detected under one type of code coverage but missed under another. In frequently released software projects, the time budget for testing is limited, so configuring the code coverage of a testing tool can effectively improve project quality. In this paper, we conduct a study on whether a fault can be detected under a specific type of code coverage in automated test generation. We build predictive models on 60 metrics of faulty source code to identify detectable faults under eight types of code coverage. In the experiment, an off-the-shelf tool, EvoSuite, is used to generate test data. Experimental results addressing four research questions show that different types of code coverage result in the detection of different faults; one type of code coverage can serve as a supplement to increase the number of detected faults after another type has been applied; and for each coverage type, the number of detected faults increases with the cutoff time of test generation. Our results also show that the choice of code coverage can be learned via multi-objective optimization from sampled faults and directly applied to new faults. This study can be viewed as a preliminary result supporting the configuration of code coverage in the application of automated test generation.

