Abstract

We propose a method for efficient training of Q-functions for continuous-state Markov Decision Processes (MDPs), such that the traces of the resulting policies satisfy a given Linear Temporal Logic (LTL) property. LTL, a modal logic, can express a wide range of time-dependent logical properties (including safety) that are quite close to patterns of natural language. We convert the LTL property into a limit-deterministic Büchi automaton and construct an on-the-fly synchronised product MDP. The control policy is then synthesised by defining an adaptive reward function and by applying a modified neural fitted Q-iteration algorithm to the synchronised structure, assuming that no prior knowledge of the original MDP is available (i.e., the method is model-free). The proposed method is evaluated in a numerical study to test the quality of the generated control policy and is compared with conventional methods for policy synthesis, such as MDP abstraction (Voronoi quantizer) and approximate dynamic programming (fitted value iteration).
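To make the pipeline described above concrete, the following is a minimal, illustrative sketch (not the paper's exact construction): a toy limit-deterministic Büchi automaton, an on-the-fly synchronised product with an unknown continuous-state MDP, a simple surrogate reward that pays off only in accepting automaton states (a placeholder for the paper's adaptive reward), and a batch fitted Q-iteration loop using a scikit-learn MLP as the Q-function approximator. The environment dynamics, labelling function, and all hyperparameters are assumptions chosen for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

class LDBA:
    """Toy limit-deterministic Büchi automaton: avoid 'unsafe', eventually reach 'goal'."""
    def __init__(self):
        self.initial, self.accepting, self.trap = 0, {1}, 2
        # Partial transition map delta[(q, label)] -> q'; missing entries keep the current state.
        self.delta = {(0, "goal"): 1, (0, "unsafe"): 2, (1, "unsafe"): 2}

    def step(self, q, label):
        return self.delta.get((q, label), q)

class ProductMDP:
    """On-the-fly synchronised product of a continuous-state MDP and the LDBA above."""
    def __init__(self, ldba):
        self.ldba = ldba

    def reset(self):
        self.x, self.q = np.zeros(2), self.ldba.initial
        return np.append(self.x, self.q)

    def label(self, x):
        # Illustrative labelling of the continuous state space (goal region / unsafe region).
        if np.linalg.norm(x - np.array([4.0, 4.0])) < 0.5:
            return "goal"
        return "unsafe" if x[0] < -1.0 else "none"

    def step(self, a):
        moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
        self.x = self.x + 0.5 * moves[a] + 0.05 * np.random.randn(2)  # unknown, noisy dynamics
        self.q = self.ldba.step(self.q, self.label(self.x))
        # Surrogate reward: pay off only in accepting automaton states
        # (a stand-in for the adaptive reward used in the paper).
        r = 1.0 if self.q in self.ldba.accepting else 0.0
        return np.append(self.x, self.q), r, self.q == self.ldba.trap

def fitted_q_iteration(env, n_actions=4, episodes=300, horizon=50, gamma=0.99, iters=15):
    """Batch fitted Q-iteration on product-MDP transitions gathered with a random policy."""
    batch = []
    for _ in range(episodes):
        s = env.reset()
        for _ in range(horizon):
            a = np.random.randint(n_actions)
            s2, r, done = env.step(a)
            batch.append((s, a, r, s2, done))
            s = s2
            if done:
                break
    S, A, R, S2, D = map(np.array, zip(*batch))
    q_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=400)
    targets = np.repeat(R[:, None], n_actions, axis=1)     # crude initial Q-targets
    for _ in range(iters):
        q_net.fit(S, targets)                               # regression step of fitted Q-iteration
        q_next = q_net.predict(S2).max(axis=1)              # greedy bootstrap value
        targets = q_net.predict(S)
        targets[np.arange(len(A)), A] = R + gamma * (1.0 - D) * q_next
    q_net.fit(S, targets)
    return q_net

if __name__ == "__main__":
    q = fitted_q_iteration(ProductMDP(LDBA()))
    print("Q-values at the initial product state:", q.predict(np.zeros((1, 3)))[0])
```

The greedy policy with respect to the learned Q-function is then read off over the product state (continuous MDP state plus automaton state); the automaton component is what encodes progress towards satisfying the LTL property.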
