Abstract

Task and motion planning (TAMP) integrates high-level task planning in a discrete space with low-level motion execution in a continuous space. This integration is susceptible to uncertainty and computationally challenging, since low-level actions must be verified against high-level tasks. Therefore, this paper presents a hierarchical task and motion planning method under uncertainty. We utilize Markov Decision Processes (MDPs) to model task and motion planning in a stochastic environment. The motion planner handles motion uncertainty and leverages physical constraints to synthesize an optimal low-level control policy for a single robot, generating motions in continuous action and state spaces. Given this optimal control policy for multiple homogeneous robots, the task planner synthesizes an optimal high-level tasking policy in discrete task and state spaces, addressing both task and motion uncertainties. Both the tasking and control policies are synthesized with deep reinforcement learning algorithms. The performance of our method is validated in realistic physics-based simulations with two quadrotors transporting objects in a warehouse setting.
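To make the two-level architecture concrete, below is a minimal sketch of the hierarchy the abstract describes: a discrete task policy assigns goals to homogeneous robots, and a continuous motion policy drives each robot toward its assigned goal under motion noise. All names, dynamics, and parameters here (TaskPolicy, MotionPolicy, the toy proportional controller, the Gaussian noise) are illustrative assumptions, not the authors' implementation; in the paper both policies are deep reinforcement learning networks trained over MDPs, and the robots are quadrotors in a physics-based warehouse simulation.

```python
import random


class MotionPolicy:
    """Low-level control policy: maps a continuous robot state and a
    commanded goal to a continuous action. Stand-in for a trained
    DRL actor that would respect physical constraints."""

    def act(self, state, goal, gain=0.5):
        # Naive proportional step toward the goal (illustrative only).
        return [gain * (g - s) for s, g in zip(state, goal)]


class TaskPolicy:
    """High-level tasking policy: maps the discrete task state (here,
    the set of unserved goals) to a goal assignment per robot.
    Stand-in for a trained DRL policy over the discrete task MDP."""

    def assign(self, remaining_goals, num_robots):
        # Assign each robot a distinct pending goal, if one exists.
        return {r: remaining_goals[r]
                for r in range(min(num_robots, len(remaining_goals)))}


def run_episode(goals, num_robots=2, noise=0.05, steps=200, tol=0.1):
    """Roll out the hierarchy in a toy 2-D stochastic environment."""
    task_policy, motion_policy = TaskPolicy(), MotionPolicy()
    states = {r: [0.0, 0.0] for r in range(num_robots)}
    remaining = [list(g) for g in goals]
    for _ in range(steps):
        if not remaining:
            break
        assignment = task_policy.assign(remaining, num_robots)
        for r, goal in assignment.items():
            action = motion_policy.act(states[r], goal)
            # Toy stochastic dynamics: action plus additive motion noise.
            states[r] = [s + a + random.gauss(0.0, noise)
                         for s, a in zip(states[r], action)]
            if max(abs(s - g) for s, g in zip(states[r], goal)) < tol:
                remaining.remove(goal)  # goal reached; update task state
    return remaining  # goals still unserved after the episode


if __name__ == "__main__":
    leftover = run_episode([[5.0, 2.0], [1.0, 4.0], [3.0, 3.0]])
    print("unfinished goals:", leftover)
```

The key design point the sketch preserves is the separation of concerns: the task policy only observes and updates the discrete task state, while the motion policy alone handles continuous states, actions, and motion uncertainty.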
