Abstract

Software testing is an essential activity for quality assurance, but it is an error-prone and effort-consuming task when conducted manually. The use of automated tools is therefore fundamental, as is the evaluation of these tools in practice. However, there is little evidence on how such tools perform on highly-configurable systems. Highly-configurable systems are common in industry as an approach to developing families of products, where products offer different configuration options to meet customer needs. To fill this gap, this paper reports results on the use of Randoop, a tool widely used in industry, to test variants of the Graph Product Line (GPL) family of products. Our goal is to evaluate the reusability of a test data set generated by Randoop for one product when it is reused to test other GPL products. We also investigate the impact of using different values of runtime, the main Randoop parameter, on the number of reused test data. The results show that the value used for runtime in general does not increase the coverage of test data reused across different products. Furthermore, similarity among products does not ensure greater reusability.
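For context, Randoop's generation budget is set via a time limit on the command line; a minimal invocation varying that budget might look like the sketch below. The classpath, jar version, target class name, and output directories are illustrative placeholders (not taken from the paper), and the `--time-limit` flag name follows recent Randoop releases.

```shell
# Sketch: generate JUnit tests for a hypothetical GPL variant class,
# repeating the run with different time budgets (the "runtime"
# parameter studied in the paper). All paths and names are placeholders.
for budget in 30 60 120; do
  java -classpath build/classes:randoop-all-4.3.2.jar \
       randoop.main.Main gentests \
       --testclass=gpl.Graph \
       --time-limit="$budget" \
       --junit-output-dir="tests-${budget}s"
done
```

Each run would then produce a separate JUnit suite whose tests could be re-executed against other product variants to measure reuse.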
