Abstract

Research literature on Probabilistic Model Checking (PMC) encompasses a well-established set of algorithmic techniques for analyzing probabilistic models. In the last decade, owing to the increasing availability of effective tools, PMC has found applications in many domains, including computer networks, computational biology and robotics. In this paper, we evaluate PMC tools, namely COMICS, MRMC and PRISM, to investigate safe reinforcement learning in robots, i.e., to establish the safety of policies learned from feedback signals received upon acting in partially unknown environments. Introduced in previous contributions of ours, this application is a challenging domain wherein PMC tools act as back-ends of an automated methodology aimed at verifying and repairing control policies. We present an evaluation of the current state-of-the-art PMC tools to assess their potential on various case studies, including both real and simulated robots accomplishing navigation, manipulation and reaching tasks.
