Abstract

UML activity diagrams and state machines are both used for modeling system behavior from the user perspective and are frequently the basis for deriving system test cases. In practice, system test cases are often derived manually from UML activity diagrams or state machines. For this task, the comprehensibility of the respective models is essential, and comparing it is practically relevant to support model selection and design as well as subsequent test derivation. Therefore, the objective of this paper is to compare the comprehensibility of UML activity diagrams and state machines during manual test case derivation. We investigate the comprehensibility of UML activity diagrams and state machines in a controlled student experiment. Three measures of comprehensibility were investigated: (1) the self-assessed comprehensibility, (2) the actual comprehensibility measured by the correctness of answers to comprehensibility questions, and (3) the number of errors made during test case derivation. The experiment was performed and internally replicated with a total of 84 participants divided into three groups at two institutions. Our experiment indicates that activity diagrams are more comprehensible but also more error-prone with regard to manual test case derivation. We discuss how these results can improve system modeling and test case design.

Highlights

  • Behavior models like UML activity diagrams or state machines are used for system modeling, but also serve as a valuable basis for deriving test cases in model-based testing (Utting et al., 2012)

  • Three measures of comprehensibility are collected for each diagram type in a controlled experiment: (1) the self-assessed comprehensibility, (2) the actual comprehensibility measured by the correctness of answers to comprehensibility questions, and (3) the number of errors made during test case derivation

  • With p-values of 0.01491, 0.00005, and 0.851, respectively, we reject the null hypothesis H1,0 for test steps and test cases but not for test suites, and conclude that significantly more test step and test case errors are made when deriving test cases from UML activity diagrams than from UML state machines

Introduction

Behavior models like UML activity diagrams or state machines are used for system modeling, but also serve as a valuable basis for deriving test cases in model-based testing (Utting et al., 2012). The fully automated derivation of test cases from UML diagrams is still challenging, because it requires high-quality UML models that contain all the information needed for automatic test case derivation, and such models are rarely available in practice. We therefore investigate the case that is still more relevant in practice than automation: a test designer analyzes a UML activity diagram or state machine and manually derives several test cases in order to achieve test coverage. UML activity diagrams and state machines have not yet been compared in controlled experiments, neither with respect to comprehensibility nor with respect to manual test case derivation. Comprehensibility (also called understandability) is measured differently across studies; in this paper, we measure it by the self-assessed comprehensibility, the correctness of answers to comprehensibility questions, and the number of errors made during test case derivation.
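
As a concrete illustration of the manual derivation step described above, the following sketch shows how achieving all-transitions coverage on a small state machine amounts to collecting event sequences (test cases) until every transition has been exercised at least once. The Python code and the ticket-machine states and events it uses are illustrative assumptions only; they are not taken from the paper or its experiment material.

```python
from collections import deque

# Hypothetical ticket-machine state machine (for illustration only):
# (source state, event) -> target state
TRANSITIONS = {
    ("Idle", "insertCoin"): "Paid",
    ("Paid", "selectTicket"): "Printing",
    ("Paid", "cancel"): "Idle",
    ("Printing", "takeTicket"): "Idle",
}

INITIAL_STATE = "Idle"


def derive_test_cases(transitions, initial_state):
    """Greedily collect event sequences (test cases) starting in the initial
    state until every transition has been exercised at least once
    (all-transitions coverage)."""
    uncovered = set(transitions)
    test_cases = []
    while uncovered:
        # Breadth-first search for a shortest path from the initial state
        # that ends in a still-uncovered transition.
        queue = deque([(initial_state, [])])
        visited = {initial_state}
        found = None
        while queue:
            state, path = queue.popleft()
            for (src, event), dst in transitions.items():
                if src != state:
                    continue
                step = (src, event, dst)
                if (src, event) in uncovered:
                    found = path + [step]
                    queue.clear()
                    break
                if dst not in visited:
                    visited.add(dst)
                    queue.append((dst, path + [step]))
        if found is None:
            break  # remaining transitions are unreachable from the initial state
        uncovered -= {(src, event) for src, event, _ in found}
        test_cases.append([f"{src} --{event}--> {dst}" for src, event, dst in found])
    return test_cases


if __name__ == "__main__":
    for i, steps in enumerate(derive_test_cases(TRANSITIONS, INITIAL_STATE), start=1):
        print(f"Test case {i}:")
        for step in steps:
            print(f"  {step}")
```

Running this greedy procedure on the four-transition example yields four short test cases, mirroring how a test designer would walk through the diagram path by path until the chosen coverage criterion is satisfied.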
