Abstract

In real-world environments, robotic task planning must handle both partial observability and unexpected environment dynamics. A robust task plan requires the robot's observation actions to run concurrently with its task actions, so that environmental changes can be observed and adapted to. The Partially Observable Markov Decision Process (POMDP) has been widely applied to planning in partially observable domains. For realistic robotic tasks, however, the POMDP model and its planning algorithms are overly restrictive. One limitation is that task actions are modelled as atomic entities with only endpoint effects; no conditions can be specified at arbitrary points during action execution. Another is that an observation is obtained only after each task action completes, with no intermediate observations or decision-making during execution. To mitigate these limitations, this paper first proposes an Adjoint Action Model (AAM) that explicitly defines the continuous interaction between the robot's observation actions and task actions. We then extend the POMDP task action model with intermediate invariant conditions that specify the runtime properties of action execution. Finally, we propose an AAM-extended POMDP planning approach that handles observation action planning and task replanning during task action execution. We experimentally demonstrate that plans produced by our approach cope with environment dynamics more effectively and robustly than those from standard POMDP planning.
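To make the critiqued limitation concrete, the sketch below shows the standard POMDP execution loop in which each task action runs atomically and an observation arrives only at the action's endpoint, followed by a Bayesian belief update. This is a minimal illustration of the conventional model, not the paper's AAM-extended approach; all names (T, Omega, env, policy) are illustrative assumptions.

```python
# Minimal sketch of the standard POMDP execution loop the abstract critiques:
# the agent commits to a task action, the action runs to completion as one
# atomic step, and an observation is obtained only at the action's endpoint.
# T(s_next, s, a) and Omega(o, s_next, a) are assumed transition and
# observation probability functions; env and policy are placeholders.

def belief_update(belief, action, obs, T, Omega):
    """Bayesian update: b'(s') ∝ Omega(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    new_belief = {}
    for s_next in belief:
        pred = sum(T(s_next, s, action) * p for s, p in belief.items())
        new_belief[s_next] = Omega(obs, s_next, action) * pred
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

def run_episode(belief, policy, env, T, Omega, steps=10):
    for _ in range(steps):
        action = policy(belief)
        # The action executes atomically: no intermediate observations,
        # no invariant checks, and no replanning until it finishes.
        obs = env.execute(action)  # observation only at the endpoint
        belief = belief_update(belief, action, obs, T, Omega)
    return belief
```

The AAM described in the abstract targets exactly the commented line: instead of deferring all observation to the endpoint, observation actions run concurrently with the task action, and intermediate invariant conditions can trigger replanning mid-execution.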
