Abstract

One of the major advantages of model checking over other formal methods is its ability to generate a counterexample when a model does not satisfy its specification. A counterexample is an error trace that helps to locate the source of the error, and it therefore represents a valuable debugging tool. In Probabilistic Model Checking (PMC), the task of counterexample generation has a quantitative aspect. Unlike in conventional model checking, where earlier methods generate the counterexample as a single path ending in a bad state that represents the failure, a counterexample in PMC is a set of evidences, i.e., diagnostic paths satisfying a path formula, whose combined probability mass violates the probability threshold. Counterexample generation alone is not sufficient for finding the exact source of the error. For this reason, many debugging techniques have been proposed in conventional model checking that analyze the generated counterexamples to locate the source of the error. In PMC, debugging counterexamples is more challenging, since a probabilistic counterexample consists of multiple paths and carries quantitative information. In this article, we propose a debugging technique based on stochastic games to analyze probabilistic counterexamples generated for probabilistic models described as Markov chains in the PRISM language. The technique rests mainly on the idea of treating the modules composing the system as players in a reachability game whose actions contribute to the evolution of the game. Through several case studies, we show that our technique is very effective for systems composed of multiple components. The results are further validated by a debugging tool we introduce, called GEPCX (Game Explainer of Probabilistic Counterexamples).
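To make the notion of a probabilistic counterexample concrete, the following is a minimal sketch, not the paper's game-based technique: it assumes a toy hand-written DTMC (the states, probabilities, and threshold are purely illustrative) and collects the most probable diagnostic paths reaching a "bad" state until their accumulated probability mass exceeds the bound of a property such as P<=0.3 [ F "bad" ].

```python
# Minimal sketch (not the paper's GEPCX method): a probabilistic counterexample
# as a set of diagnostic paths whose probability mass violates the threshold.

# Hypothetical toy DTMC: state -> list of (successor, transition probability)
DTMC = {
    "s0": [("s1", 0.6), ("s2", 0.4)],
    "s1": [("bad", 0.5), ("ok", 0.5)],
    "s2": [("bad", 0.3), ("ok", 0.7)],
    "bad": [],
    "ok": [],
}

def paths_to(target, state="s0", prob=1.0, path=("s0",), max_len=10):
    """Enumerate bounded-length finite paths from the initial state to `target`."""
    if state == target:
        yield path, prob
        return
    if len(path) >= max_len:
        return
    for succ, p in DTMC[state]:
        yield from paths_to(target, succ, prob * p, path + (succ,), max_len)

def counterexample(target="bad", threshold=0.3):
    """Greedily add the most probable diagnostic paths until their
    combined probability mass exceeds the threshold."""
    evidences, mass = [], 0.0
    for path, p in sorted(paths_to(target), key=lambda x: -x[1]):
        evidences.append((path, p))
        mass += p
        if mass > threshold:
            return evidences, mass
    return None, mass  # property not violated: no counterexample exists

if __name__ == "__main__":
    cx, mass = counterexample()
    print(f"probability mass = {mass:.2f}")
    for path, p in cx or []:
        print(" -> ".join(path), f"(p = {p:.2f})")
```

On this toy model the single most probable path (probability 0.3) does not exceed the bound on its own, so a second path must be added, illustrating why a counterexample in PMC is in general a set of paths rather than a single trace.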
