Abstract
In many domains, such as avionics, oil and gas, and maritime, a common practice is to manually derive and execute test cases from requirements, where both requirements and test cases are specified in natural language (NL) by domain experts. The manual execution of test cases depends largely on the domain experts who wrote them. Manually writing requirements and test cases introduces ambiguity into their descriptions; in addition, the resulting test cases may not be effective, since they may not be derived by systematically applying coverage criteria. In this paper, we report on a systematic approach that supports the automatic derivation of manually executable test cases from use cases. Both use cases and test cases are specified in restricted NLs along with carefully defined templates implemented in a tool. We evaluate our approach with four case studies (comprising, in total, 30 use cases and 579 steps from flows of events), two of which are industrial case studies from the oil/gas and avionics domains. The results show that our tool correctly processed all the case studies and systematically (by following carefully defined structure coverage criteria) generated 30 test case specifications (TCSs) and 389 test cases. Moreover, our approach allows the definition of test coverage criteria on requirements other than the one already implemented in our tool.