Abstract

This article considers control synthesis of an autonomous agent with linear temporal logic (LTL) specifications subject to environment and motion uncertainties. Specifically, the probabilistic motion of the agent is modeled by a Markov decision process (MDP) with unknown transition probabilities. The operating environment is assumed to be partially known, where the desired LTL specifications might be partially infeasible. A relaxed product MDP is constructed that allows the agent to revise its motion plan without strictly following the desired LTL constraints. A utility function composed of violation cost and state rewards is developed. Rigorous analysis shows that, if there almost surely (i.e., with probability 1) exists a policy that satisfies the relaxed product MDP, any algorithm that optimizes the expected utility is guaranteed to find such a policy. A reinforcement learning-based approach is then developed to generate policies that fulfill the desired LTL specifications as much as possible by optimizing the expected discounted utility of the relaxed product MDP.
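To illustrate the flavor of such a learning-based approach, the sketch below shows a tabular Q-learning loop over a hypothetical relaxed product MDP interface, where the per-step utility combines a state reward with a violation cost, as described above. This is a minimal illustration, not the authors' implementation; the interface names (`reset`, `step`, `actions`, `state_reward`, `violation_cost`) and all hyperparameters are assumptions made for the example.

```python
# Minimal Q-learning sketch over a hypothetical relaxed product MDP.
# The product_mdp interface and its method names are illustrative
# assumptions, not the implementation from the article.

import random
from collections import defaultdict


def q_learning(product_mdp, episodes=5000, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Learn a policy that maximizes the expected discounted utility,
    where utility = state reward - violation cost."""
    Q = defaultdict(float)  # Q[(state, action)] -> value estimate

    for _ in range(episodes):
        s = product_mdp.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection over the available actions.
            actions = product_mdp.actions(s)
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])

            # Transition probabilities are unknown; we only sample them.
            s_next, done = product_mdp.step(s, a)

            # Per-step utility: state reward minus the cost of violating
            # the (relaxed) LTL constraints along this transition.
            utility = (product_mdp.state_reward(s_next)
                       - product_mdp.violation_cost(s, a, s_next))

            # Standard Q-learning update toward the discounted utility target.
            best_next = 0.0 if done else max(
                Q[(s_next, a2)] for a2 in product_mdp.actions(s_next))
            Q[(s, a)] += alpha * (utility + gamma * best_next - Q[(s, a)])
            s = s_next

    # Extract a greedy policy from the learned Q-values.
    return {s: max(product_mdp.actions(s), key=lambda act: Q[(s, act)])
            for s in product_mdp.states()}
```

Under these assumptions, the learned greedy policy maximizes the expected discounted utility of the relaxed product MDP and therefore favors runs that satisfy the desired LTL specifications as much as possible while accumulating state rewards.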
