Abstract

System verification is often hindered by the absence of formal models. Peled et al. proposed black-box checking as a solution to this problem. This technique applies active automata learning to infer models of systems with unknown internal structure. This kind of learning relies on conformance testing to determine whether a learned model actually represents the considered system. Since conformance testing may require the execution of a large number of tests, it is considered the main bottleneck in automata learning. In this paper, we describe a randomised conformance testing approach which we extend with fault-based test selection. To show its effectiveness we apply the approach in learning experiments and compare its performance to a well-established testing technique, the partial W-method. This evaluation demonstrates that our approach significantly reduces the cost of learning. In multiple experiments, we reduce the cost by at least one order of magnitude.

Highlights

  • Since Peled et al. [26] have shown that active automata learning can provide models of black-box systems to enable formal verification, this kind of learning has turned into an active area of research in formal methods.

  • Active learning of automata in the minimally adequate teacher (MAT) framework, as introduced by Angluin [5], assumes the existence of a teacher. This teacher must be able to answer two types of queries: membership queries and equivalence queries. The former corresponds to a single test of the system under learning (SUL), checking whether a sequence of actions can be executed or determining the outputs produced in response to a sequence of inputs.

  • We address conformance testing in active automata learning.


Introduction

Since Peled et al. [26] have shown that active automata learning can provide models of black-box systems to enable formal verification, this kind of learning has turned into an active area of research in formal methods. Active learning of automata in the minimally adequate teacher (MAT) framework, as introduced by Angluin [5], assumes the existence of a teacher. This teacher must be able to answer two types of queries: membership queries and equivalence queries. The former corresponds to a single test of the system under learning (SUL), checking whether a sequence of actions can be executed or determining the outputs produced in response to a sequence of inputs. Equivalence queries, on the other hand, ask whether a hypothesis model produced by the learner represents the SUL. The teacher either answers affirmatively or with a counterexample showing non-equivalence between the SUL and the hypothesis.
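To make the two query types concrete, the following is a minimal sketch of a MAT-style teacher in Python. The SUL and the hypothesis are hypothetical Mealy machines given as transition tables (these machines, and all function names, are illustrative assumptions, not the paper's implementation). A membership query runs an input sequence and returns the outputs; an equivalence query is approximated, in the spirit of randomised conformance testing, by executing random test sequences and reporting a counterexample if one is found.

```python
import random

# Hypothetical SUL: a Mealy machine as a table state -> input -> (output, next state).
SUL = {
    0: {"a": ("x", 1), "b": ("y", 0)},
    1: {"a": ("x", 0), "b": ("z", 1)},
}

# Hypothetical (wrong) hypothesis: differs from the SUL on input "b" in state 1.
HYP = {
    0: {"a": ("x", 1), "b": ("y", 0)},
    1: {"a": ("x", 0), "b": ("y", 1)},
}

def membership_query(machine, inputs, start=0):
    """Run an input sequence on a machine and return the produced outputs."""
    state, outputs = start, []
    for symbol in inputs:
        out, state = machine[state][symbol]
        outputs.append(out)
    return outputs

def equivalence_query(sul, hypothesis, alphabet, tests=1000, max_len=8, seed=0):
    """Approximate an equivalence query by random testing: return a
    counterexample sequence on which SUL and hypothesis disagree, or None."""
    rng = random.Random(seed)
    for _ in range(tests):
        seq = [rng.choice(alphabet) for _ in range(rng.randint(1, max_len))]
        if membership_query(sul, seq) != membership_query(hypothesis, seq):
            return seq
    return None

counterexample = equivalence_query(SUL, HYP, ["a", "b"])
```

Since random testing only samples the input space, a `None` answer does not prove equivalence; the paper's fault-based test selection is precisely a way to choose such test sequences more effectively than uniform random sampling.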

