Abstract

In black-box checking (BBC), incremental hypotheses about the behavior of a system are learned in the form of finite automata, using information from a given set of requirements specified in Linear-time Temporal Logic (LTL). The LTL formulae are checked on intermediate automata, and potential counterexamples are validated on the actual system. Spurious counterexamples are used by the learner to refine these automata. We improve BBC in two directions. First, we improve the checking of lasso-like counterexamples by assuming a check for state equivalence. This yields a sound method that does not require an upper bound on the number of states in the system. Second, we propose to check the safety portion of an LTL property first, deriving simple counterexamples using monitors. We extended LearnLib's system-under-learning API to make our methods accessible, using LTSmin as the model checker under the hood. We illustrate how LearnLib's most recent active learning algorithms can be used for BBC in practice. Using the RERS 2017 challenge, we provide experimental results on the performance of all of LearnLib's active learning algorithms when applied in a BBC setting. We show that the novel incremental algorithms TTT and ADT perform best. We also provide experiments on the efficiency of various BBC strategies.
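The BBC refinement loop described in the abstract can be sketched as follows. This is a minimal, illustrative stand-in: the SUL, the hypotheses, and the reachability-based safety check below are toy constructions for exposition, not LearnLib or LTSmin APIs.

```python
from collections import deque

def find_counterexample(automaton, bad_state, start):
    """Toy 'model check': BFS for an input word reaching bad_state, or None."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, word = queue.popleft()
        if state == bad_state:
            return word
        for sym, nxt in sorted(automaton.get(state, {}).items()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + [sym]))
    return None

# Toy SUL: two states, never reaches 'bad', so the property actually holds.
SUL = {'s0': {'a': 's1', 'b': 's0'}, 's1': {'a': 's0', 'b': 's1'}}

def run_sul(word, state='s0'):
    for sym in word:
        state = SUL[state][sym]
    return state

# Two successive learner hypotheses; the first wrongly predicts a bad state.
hypotheses = [
    {'h0': {'a': 'bad', 'b': 'h0'}},
    {'h0': {'a': 'h1', 'b': 'h0'}, 'h1': {'a': 'h0', 'b': 'h1'}},
]

for hypothesis in hypotheses:
    cex = find_counterexample(hypothesis, 'bad', 'h0')
    if cex is None:
        print('property holds on the hypothesis')
        break
    elif run_sul(cex) == 'bad':
        print('real violation:', cex)
        break
    else:
        # Spurious: the hypothesis disagrees with the SUL on this word,
        # so the word is handed back to the learner to refine the hypothesis.
        print('spurious counterexample:', cex)
```

On this toy instance the first hypothesis yields the spurious counterexample `['a']`, and the refined second hypothesis satisfies the property.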

Highlights


  • Contributions:
    – Two variations of black-box checking algorithms.
    – A sound black-box checking approach that uses state equivalences, instead of an upper bound on the number of states in the System Under Learning (SUL).
    – A novel sound black-box checking approach that uses monitors that may provide counterexamples for safety properties.
    – A modular design, allowing new model checkers or active learning algorithms to be added, or smarter strategies to be implemented for detecting spurious counterexamples.
    – A thorough, reproducible experimental setup, with several combinations of automaton types, AAL algorithms, and BBC strategies.

  • The key reason why adding properties to verify to the learning algorithm can be useful is that model checking queries are very cheap compared to equivalence queries.


Summary

Introduction

There are many formal methods for analyzing the desired behavior of complex industrial critical systems, such as wafer steppers and X-ray diffraction machines. From a formal methods perspective, both liveness (something good eventually happens) and safety (something bad never happens) are essential to the functional reliability of those systems. It is key for testers and developers to have usable tooling available to investigate those liveness and safety properties. In earlier work, we introduced a state equivalence check to obtain a sound method without assuming an upper bound on the number of states in the System Under Learning (SUL). In this extended article, we describe the method and the design of its implementation in more detail. We performed extensive experiments comparing both methods under several BBC strategies, showing how well they perform on an actual case study.
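A violation of a safety property is always witnessed by a finite prefix of a run, which is what makes monitoring attractive: a monitor can report a counterexample as soon as the bad event occurs, without reasoning about infinite behavior. A minimal sketch, assuming an event-trace representation and the hypothetical property "`error` never occurs"; all names are illustrative:

```python
def safety_monitor(trace, bad_event='error'):
    """Return the shortest violating prefix for 'bad_event never occurs', else None.

    A monitor can only ever conclude 'violated': a clean trace may still be
    extended to a violating one, so None only means 'no violation yet'.
    """
    prefix = []
    for event in trace:
        prefix.append(event)
        if event == bad_event:
            return prefix  # finite counterexample, replayable on the SUL
    return None
```

For example, `safety_monitor(['init', 'ok', 'error', 'ok'])` yields the finite witness `['init', 'ok', 'error']`, whereas a liveness violation would require a lasso-like (infinite) counterexample.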

BBC among other formal methods
Contributions
Preliminaries
Active learning
Black-box checking with model checking
Black-box checking in the LearnLib
New purposes for queries
The BBC algorithm and strategies: informally
Black-box checking with monitoring
The new API in the LearnLib
CExFirstOracle
The algorithms: formally
Related work
Experimental results
The RERS challenge
Discussion of the algorithms’ performance
Conclusion and future work