Abstract

Background: One way to reduce testing cost is to automate testing tasks. Model-based testing (MBT) tools take system behaviour models as inputs and automatically generate tests from them. In the literature, few experiments evaluate the impact of using an MBT tool on test case productivity, test coverage, and bug detection rate.

Aims: This work is the first part of an evaluation of the impacts of using TaRGeT, an MBT tool whose input models are use cases written in natural language. We assess the effect of using the tool on test case productivity (number of test steps produced per hour).

Method: A quasi-experiment was carried out to compare the productivity of creating functional tests manually with the productivity of designing tests using TaRGeT. Application size and the subjects' experience were controlled, and the collected data were statistically analysed.

Results: The results show that the mean productivity when using TaRGeT is 30% higher than the mean productivity when designing tests manually. Despite this difference, no statistically significant difference in productivity could be detected between designing tests with TaRGeT and designing them manually.

Conclusions: We discuss possible reasons for this behaviour and other findings, and present lessons learned for future experiments.
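As an illustration of the metric, and assuming the straightforward reading of the definition given above (the abstract does not state the exact formula used in the experiment), test case productivity can be written as

\[
P = \frac{\text{number of test steps produced}}{\text{design time in hours}}
\]

so that, for example, a subject who writes 90 test steps in 3 hours scores P = 30 steps/hour, and the reported result corresponds to \( \bar{P}_{\mathrm{TaRGeT}} \approx 1.3 \, \bar{P}_{\mathrm{manual}} \).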
