Abstract
This article defines embeddings between state-based and action-based probabilistic logics that can be used to support probabilistic model checking. First, we slightly modify the model embeddings proposed in the literature to allow for invisible computation steps and to preserve forward and backward bisimulation relations. Next, we propose the syntax and semantics of an action-based Probabilistic Computation Tree Logic (APCTL) and an action-based PCTL* (APCTL*), both interpreted over action-labeled discrete-time Markov chains (ADTMCs). We show that both of these logics are strictly more expressive than the probabilistic variant of Hennessy–Milner logic (prHML). We define an embedding \(aldl\) that constructs APCTL* formulae from PCTL* formulae, and an embedding \(sldl\) that maps APCTL* formulae to PCTL* formulae. Similarly, we define the embeddings \(aldl^{\prime }\) and \(sldl^{\prime }\) from PCTL to APCTL and from APCTL to PCTL, respectively. We also define a reward-based variant of APCTL (APRCTL), interpreted over action-based Markov reward models (AMRMs), and modify the logical embeddings \(aldl^{\prime }\) and \(sldl^{\prime }\) accordingly so that they take the notion of rewards into account. In addition, we show that rewards can be used to reason about the bounded-until operator in PCTL and APCTL. Finally, we prove that our logical embeddings, combined with the model embeddings, enable one to minimize, analyze, and verify probabilistic models in one domain using state-of-the-art tools and techniques developed for the other. To validate the efficacy of our theoretical framework, we apply it to two case studies using the probabilistic symbolic model checker PRISM.
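To make the setting concrete, the following is a minimal, illustrative PRISM sketch of the kind of artifacts the framework targets; the module, labels, and properties are our own assumptions for exposition, not taken from the paper's case studies. The action labels on the commands suggest the action-based (ADTMC-style) view of a DTMC, and the per-step reward structure illustrates how cumulative rewards can mimic the step bound of a bounded-until formula.

```
// model.pm -- a hypothetical four-state channel, for illustration only
dtmc

module channel
  // s=0: idle, s=1: sending, s=2: delivered, s=3: lost
  s : [0..3] init 0;

  // action labels on commands give the action-based (ADTMC-style) view
  [send]  s=0 -> 1 : (s'=1);
  [ack]   s=1 -> 0.9 : (s'=2) + 0.1 : (s'=3);
  [done]  s=2 -> 1 : (s'=2);
  [retry] s=3 -> 1 : (s'=0);
endmodule

label "delivered" = s=2;

// one reward unit per computation step; rewards of this shape are
// what lets reward-based reasoning capture step-bounded until
rewards "steps"
  true : 1;
endrewards

// --- properties (kept in a separate .props file in practice) ---
// P=? [ true U<=5 "delivered" ]   // PCTL step-bounded until
// R{"steps"}=? [ F "delivered" ]  // expected number of steps to delivery
```

Under embeddings of the kind described above, a state-based property of such a model could be translated into an action-based formula over the corresponding ADTMC (and vice versa) and analyzed with the same tool.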