Abstract

In mission-critical systems a single failure might cause catastrophic consequences. This places high demands on the timely detection of design faults and runtime failures. Traditional software testing methods are practically incapable of detecting deeply nested faults that occur only sporadically. The discovery of such bugs can be facilitated by generating well-targeted test cases in which the test scenario is specified explicitly. On the other hand, the excess of implementation detail in manually crafted test scripts makes the test results hard to understand and interpret. This paper defines TDLTP, a high-level test scenario specification language for specifying complex test scenarios relevant to model-based testing of mission-critical systems. The syntax and semantics of the TDLTP operators are defined, and the transformation rules that map its declarative expressions to executable Uppaal Timed Automata test models are specified. The scalability of the method is demonstrated on the TUT100 satellite software integration testing case study.

Highlights

  • In model-based testing (MBT), the requirements model of the System Under Test (SUT) describes the expected correct behavior of the system under possible inputs from its environment

  • In this paper the high-level test purpose specification language TDLTP and its syntax and semantics have been defined for model-based testing of time-critical systems

  • Based on the semantics proposed in this work, a mapping from TDLTP to the Uppaal Timed Automata (TA) formalism has been defined (a sketch of the target TA structure is given after this list)
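To make the target formalism of that mapping concrete, the sketch below gives a minimal data-structure view of an Uppaal-style timed automaton: locations with clock invariants, and edges with clock guards, channel synchronizations, and clock resets. The field names and the toy tester automaton are illustrative assumptions only; they do not reproduce the paper's TDLTP-to-TA transformation rules.

```python
from dataclasses import dataclass
from typing import List, Optional

# Minimal, illustrative view of an Uppaal-style timed automaton.
# Names and the example automaton are assumptions for illustration,
# not the paper's generated test models.

@dataclass
class Location:
    name: str
    invariant: Optional[str] = None   # e.g. "x <= 5" over clock x

@dataclass
class Edge:
    source: str
    target: str
    guard: Optional[str] = None       # clock/data guard, e.g. "x >= 2"
    sync: Optional[str] = None        # channel sync, e.g. "req!" or "ack?"
    update: Optional[str] = None      # e.g. clock reset "x := 0"

@dataclass
class TimedAutomaton:
    clocks: List[str]
    locations: List[Location]
    edges: List[Edge]
    initial: str

# Toy tester automaton: send a stimulus, expect a response within 5 time units.
tester = TimedAutomaton(
    clocks=["x"],
    locations=[
        Location("idle"),
        Location("waiting", invariant="x <= 5"),
        Location("passed"),
    ],
    edges=[
        Edge("idle", "waiting", sync="stimulus!", update="x := 0"),
        Edge("waiting", "passed", guard="x <= 5", sync="response?"),
    ],
    initial="idle",
)
```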


Summary

Introduction

In model-based testing (MBT), the requirements model of the System Under Test (SUT) describes the expected correct behavior of the system under possible inputs from its environment. The model, represented in a suitable machine-interpretable formalism, can be used to automatically generate test cases either offline or online, and can serve as the oracle that checks whether the SUT behavior conforms to the model. In online test generation the model is executed in lock step with the SUT. The test model communicates with the SUT via the controllable inputs and observable outputs of the SUT. Test description in MBT typically relies on two formal representations: the SUT modelling language and the test purpose specification language. An extensive survey on modelling formalisms used in MBT can be found in [20].
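As a rough illustration of the online setting described above, the following sketch shows a test loop in which the model acts as the oracle in lock step with the SUT: the tester picks a controllable input allowed by the model, applies it to the SUT, observes the SUT output, and checks it against what the model permits. The `Model` and `SutAdapter` interfaces are hypothetical placeholders introduced here for illustration; they are not part of TDLTP or Uppaal.

```python
import random
from typing import Optional, Protocol

class Model(Protocol):
    # Hypothetical oracle interface (assumption, not TDLTP/Uppaal API).
    def enabled_inputs(self) -> list[str]: ...
    def step(self, symbol: str) -> None: ...
    def allows_output(self, symbol: str) -> bool: ...

class SutAdapter(Protocol):
    # Hypothetical adapter exposing the SUT's controllable inputs
    # and observable outputs.
    def send(self, symbol: str) -> None: ...
    def receive(self, timeout_s: float) -> Optional[str]: ...

def online_test(model: Model, sut: SutAdapter, steps: int = 100) -> str:
    """Execute the model in lock step with the SUT and act as the oracle."""
    for _ in range(steps):
        inputs = model.enabled_inputs()
        if not inputs:
            return "pass"                     # nothing left to stimulate
        stimulus = random.choice(inputs)      # online input selection
        model.step(stimulus)                  # advance the model ...
        sut.send(stimulus)                    # ... and the SUT together
        observed = sut.receive(timeout_s=1.0)
        if observed is not None:
            if not model.allows_output(observed):
                return "fail"                 # conformance violation detected
            model.step(observed)              # keep the model in lock step
    return "inconclusive"
```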

