Abstract

An important goal of automatic testing techniques, including random testing, is to achieve high code coverage with a minimal set of test cases. To meet this goal, random testing researchers have proposed many techniques for generating test inputs and method-call sequences that yield higher code coverage. However, most proposed random testing techniques are suitable only for toy systems; on large-scale software systems they achieve low code coverage while generating many unnecessary test cases. We propose GENRED, a tool that combines three approaches: on-demand input creation and coverage-based method selection, which enhance Randoop, a state-of-the-art feedback-directed random testing technique, and a sequence-based reduction technique that removes redundant test cases without executing them. We evaluate GENRED on four open-source systems. The results show that these techniques improve branch coverage by 13.7% and prune 51.8% of the test cases without sacrificing code coverage.
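The abstract does not describe how the sequence-based reduction works internally; as a rough illustration only, the following Java sketch shows one plausible execution-free reduction: pruning any test whose method-call sequence is a prefix of a longer retained sequence. The class name, the string representation of calls, and the prefix-subsumption criterion are all assumptions for illustration, not GENRED's actual algorithm.

```java
import java.util.*;

// Hypothetical sketch: prune test sequences whose method-call sequence
// is a prefix of a longer retained sequence, without executing any test.
public class SequenceReducer {

    // Returns the subset of sequences kept after pruning redundant ones.
    static List<List<String>> reduce(List<List<String>> sequences) {
        // Consider longer sequences first, so shorter ones can be checked
        // against already-retained supersequences.
        List<List<String>> sorted = new ArrayList<>(sequences);
        sorted.sort((a, b) -> Integer.compare(b.size(), a.size()));

        List<List<String>> kept = new ArrayList<>();
        for (List<String> seq : sorted) {
            boolean redundant = false;
            for (List<String> longer : kept) {
                if (isPrefix(seq, longer)) {
                    redundant = true;
                    break;
                }
            }
            if (!redundant) kept.add(seq);
        }
        return kept;
    }

    // True if 'shorter' is an exact prefix of 'longer'.
    static boolean isPrefix(List<String> shorter, List<String> longer) {
        if (shorter.size() > longer.size()) return false;
        return longer.subList(0, shorter.size()).equals(shorter);
    }

    public static void main(String[] args) {
        List<List<String>> tests = List.of(
            List.of("Stack.<init>()", "Stack.push(int)"),
            List.of("Stack.<init>()", "Stack.push(int)", "Stack.pop()"),
            List.of("Stack.<init>()", "Stack.isEmpty()")
        );
        // The two-call sequence is a prefix of the three-call one and is pruned.
        reduce(tests).forEach(System.out::println);
    }
}
```

Under this simplified criterion, a shorter test exercises no behavior beyond what a retained supersequence already covers, so dropping it cannot reduce coverage; a real tool would likely use a richer notion of subsumption than exact prefixes.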
