Abstract

Model-Based Testing (MBT) is used for generating test suites from system models. However, as software evolves, its models tend to be updated, which may render test cases obsolete; obsolete test cases are often discarded. Discarding test cases can be very costly, since essential data, such as execution history, are lost. In this paper, we investigate the use of distance functions and machine learning to help reduce the discard of MBT tests. First, we assess the problem of managing MBT suites in the context of agile industrial projects. Then, we propose two strategies to cope with this problem: (i) a strategy based purely on distance functions. An empirical study using industrial data and ten different distance functions showed that distance functions can be effective for identifying low-impact edits, i.e., edits that lead to test cases that can be updated with little effort. We also found the optimal configuration for each function and showed that, by using this strategy, one could reduce the discard of test cases by 9.53%; (ii) a strategy that combines machine learning with distance values. This strategy classifies the impact of edits in use case documents with accuracy above 80%; it was able to reduce the discard of test cases by 10.4% and to identify test cases that should, in fact, be discarded.
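Strategy (i) can be illustrated with a minimal sketch: compute a string distance between the original and edited versions of a use case step, normalize it, and flag the edit as low impact when the normalized distance falls below a threshold. The Levenshtein distance and the 0.3 threshold below are illustrative assumptions, not the paper's tuned configuration.

```python
# Sketch of a distance-function-based edit-impact classifier.
# Assumption: Levenshtein distance and a 0.3 cutoff are placeholders
# for the functions/configurations evaluated in the study.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def edit_impact(original: str, edited: str, threshold: float = 0.3) -> str:
    """Normalize the distance by the longer string; at or below the
    threshold the edit is considered low impact (test case reusable)."""
    if not original and not edited:
        return "low"
    dist = levenshtein(original, edited) / max(len(original), len(edited))
    return "low" if dist <= threshold else "high"
```

A low-impact verdict would mark the affected test case as a candidate for update rather than discard.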

Highlights

  • Software testing plays an important role since it helps gain confidence that the software works as expected (Pressman, 2005)

  • Our results evidence that Model-Based Testing (MBT) test suites can be very sensitive to any model evolution

  • Although a small number of test cases were reused, the teams in our study still found the MBT suites useful


Introduction

Software testing plays an important role since it helps gain confidence that the software works as expected (Pressman, 2005). A test suite can combine manually and automatically executed test cases (Itkonen et al, 2009). Although automation is always desired, manually executed test cases are still very important: Itkonen et al (2009) state that manual testing still plays an important role in the software industry and cannot be fully replaced by automatic testing. MBT aims to automatically generate and manage test suites from software specification models. For instance, from the use case description in Listing 1, CLARET generates its equivalent Annotated Labeled Transition System (ALTS) model (Tretmans, 2008) (Figure 2). As MBT test suites are derived from specification artifacts, their test cases tend to reflect the system behavior (Utting et al, 2012). On the other hand, regarding MBT limitations, we can list the need for well-built models, huge test suites, and a great number of test cases that become obsolete during software evolution.
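The generation step above can be sketched in a few lines: given a transition-system-like model, a test case is the sequence of labels along a path from the initial state to a final state. The dictionary-based model format, state names, and the login scenario below are assumptions for illustration; CLARET's actual ALTS representation is not reproduced here.

```python
# Minimal sketch of MBT test generation from an ALTS-like model.
# Assumption: the model is a dict mapping each state to a list of
# (label, next_state) pairs; this is not CLARET's real format.

def generate_tests(transitions, start, finals):
    """Enumerate all loop-free paths from start to any final state;
    each path's label sequence is one test case."""
    tests = []

    def dfs(state, labels, visited):
        if state in finals:
            tests.append(list(labels))
        for label, nxt in transitions.get(state, []):
            if nxt not in visited:  # avoid revisiting states (no loops)
                dfs(nxt, labels + [label], visited | {nxt})

    dfs(start, [], {start})
    return tests

# Hypothetical login use case with a success flow and a failure flow.
model = {
    "s0": [("enter credentials", "s1")],
    "s1": [("valid -> show home", "s2"),
           ("invalid -> show error", "s3")],
    "s3": [("third failure -> lock account", "s4")],
}
suite = generate_tests(model, "s0", {"s2", "s4"})
```

Here `suite` contains two test cases, one per flow, showing how each model path maps to an executable test scenario.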
