Abstract

Value iteration, policy iteration, and their modified versions are well-known algorithms for probabilistic model checking of Markov Decision Processes. One challenge of these methods is that they are time-consuming in most cases. Several techniques have been proposed to improve the performance of iterative methods for probabilistic model checking. However, the running time of these techniques depends on the graph structure of the model, and in some cases their performance is worse than that of the standard methods. In this paper, we propose two new heuristics to accelerate the modified policy iteration method. We first define a criterion for the usefulness of the computations in each iteration of this method. The first contribution of our work is to develop and use this criterion to reduce the number of iterations in modified policy iteration. As the second contribution, we propose a new approach to identify useless updates in each iteration. This approach reduces the running time of computations by avoiding useless updates of states. The proposed heuristics have been implemented in the PRISM model checker and applied to several standard case studies. We compare the running time of our heuristics with that of previous standard and improved methods. Experimental results show that our techniques yield a significant speed-up.
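For readers unfamiliar with the baseline method being accelerated, the following is a minimal sketch of modified policy iteration for maximum reachability probabilities in an MDP. It is not the paper's implementation or PRISM's: the MDP representation (a dictionary of transition distributions) and the parameters eval_sweeps and epsilon are illustrative assumptions. The key feature it shows is that each policy is evaluated by a bounded number of value-update sweeps rather than by solving the linear system exactly.

```python
# Minimal sketch of modified policy iteration for max reachability in an MDP.
# Assumed (hypothetical) inputs:
#   states      - iterable of states
#   actions(s)  - function returning the actions enabled in state s
#   P[(s, a)]   - list of (successor, probability) pairs
#   goal        - set of target states
def modified_policy_iteration(states, actions, P, goal,
                              eval_sweeps=5, epsilon=1e-6):
    V = {s: (1.0 if s in goal else 0.0) for s in states}
    policy = {s: next(iter(actions(s))) for s in states if s not in goal}

    while True:
        # Partial policy evaluation: a fixed number of sweeps under the
        # current policy instead of an exact solve (the "modified" step).
        for _ in range(eval_sweeps):
            for s in policy:
                V[s] = sum(p * V[t] for t, p in P[(s, policy[s])])

        # Policy improvement: greedily switch each state to its best action.
        stable, delta = True, 0.0
        for s in policy:
            best_a = max(actions(s),
                         key=lambda a: sum(p * V[t] for t, p in P[(s, a)]))
            best_v = sum(p * V[t] for t, p in P[(s, best_a)])
            delta = max(delta, abs(best_v - V[s]))
            if best_a != policy[s]:
                policy[s], stable = best_a, False
            V[s] = best_v

        # Stop once the policy is stable and values have converged.
        if stable and delta < epsilon:
            return V, policy
```

The heuristics proposed in the paper target the two loops above: reducing how many outer iterations are needed, and skipping state updates whose contribution is useless in a given sweep.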
