Abstract
We propose a method for the efficient training of Q-functions for continuous-state Markov Decision Processes (MDPs), such that the traces of the resulting policies satisfy a given Linear Temporal Logic (LTL) property. LTL, a modal logic, can express a wide range of time-dependent logical properties (including "safety") in a form that closely resembles natural language. We convert the LTL property into a limit-deterministic Büchi automaton and construct an on-the-fly synchronised product MDP. The control policy is then synthesised by defining an adaptive reward function and by applying a modified neural fitted Q-iteration algorithm to the synchronised structure, assuming that no prior knowledge of the original MDP is available (i.e., the method is model-free). The proposed method is evaluated in a numerical study that tests the quality of the generated control policies and compares them against conventional methods for policy synthesis, such as MDP abstraction (Voronoi quantizer) and approximate dynamic programming (fitted value iteration).
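To make the pipeline concrete, the sketch below runs batch-mode neural fitted Q-iteration on the synchronised product of a toy one-dimensional MDP and a two-state automaton for the LTL property F goal ("eventually goal"). The environment, the labelling function, the automaton, the network size, and all hyper-parameters are illustrative assumptions and not the paper's setup; only the overall structure (product construction, reward tied to the accepting component, regression onto bootstrapped targets) mirrors the method summarised above.

```python
# Minimal sketch (not the authors' implementation) of fitted Q-iteration
# on a synchronised product MDP. All constants below are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
ACTIONS = (-1.0, 1.0)   # illustrative one-dimensional action set
GAMMA = 0.9

def step(s, a):
    """Toy continuous-state dynamics on [0, 1] (an assumption)."""
    return float(np.clip(s + 0.1 * a + rng.normal(0.0, 0.01), 0.0, 1.0))

def label(s):
    """Labelling function: the atomic proposition 'goal' holds for s > 0.8."""
    return "goal" if s > 0.8 else ""

def automaton_step(q, lab):
    """Two-state automaton for 'F goal'; state 1 is accepting and absorbing."""
    return 1 if (q == 1 or lab == "goal") else 0

def phi(s, q, a):
    """Feature vector for the product state (s, q) and action a."""
    return [s, float(q), a]

# 1) Collect product-MDP transitions with a random exploration policy
#    (the method is model-free: only sampled transitions are used).
batch = []
for _ in range(200):
    s, q = float(rng.uniform(0.0, 1.0)), 0
    for _ in range(30):
        a = float(rng.choice(ACTIONS))
        s2 = step(s, a)
        q2 = automaton_step(q, label(s2))
        r = 1.0 if q2 == 1 else 0.0      # reward tied to the accepting component
        batch.append((s, q, a, r, s2, q2))
        s, q = s2, q2

# 2) Batch-mode fitted Q-iteration: regress a neural network onto
#    bootstrapped targets r + gamma * max_a' Q(s', q', a').
X = np.array([phi(s, q, a) for s, q, a, r, s2, q2 in batch])
R = np.array([r for s, q, a, r, s2, q2 in batch])
X_next = [np.array([phi(s2, q2, a2) for s, q, a, r, s2, q2 in batch])
          for a2 in ACTIONS]

qnet = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500)
qnet.fit(X, np.zeros(len(batch)))        # initialise on zero targets
for _ in range(20):
    q_next = np.max([qnet.predict(Xa) for Xa in X_next], axis=0)
    qnet.fit(X, R + GAMMA * q_next)      # NFQ retrains from scratch each sweep

def policy(s, q):
    """Greedy policy read-out on the product state (s, q)."""
    vals = qnet.predict(np.array([phi(s, q, a) for a in ACTIONS]))
    return ACTIONS[int(np.argmax(vals))]

print(policy(0.2, 0), policy(0.9, 1))
```

The paper's adaptive reward function and limit-deterministic Büchi construction are richer than the constant accepting reward and two-state automaton used here; the sketch reproduces only the product-then-fit structure of the approach.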