Abstract

Autonomous robots trained with deep reinforcement learning have demonstrated strong performance on relatively simple, narrowly specified tasks, but they often lack the high-level, abstract planning capabilities needed for complex, long-horizon tasks. Even when an autonomous robot successfully achieves long-horizon goals, users find it difficult to trust its decision-making process. To increase user trust when an autonomous robot executes a long-horizon task, this paper proposes an algorithm that enables the agent to explain to users how it transitions from the current state to the target state in a continuous state space, as well as to explain errors in users' estimates. We propose a framework that uses a graph-based world model to identify important nodes and the reachability between them in the decision-making process; from these nodes and reachability relations, the model generates the required explanations. To validate our method's ability to generate long-horizon plans and explanations, we conducted experiments in PointMaze environments. Our simulation results confirm the effectiveness of our approach in generating reliable world models for long-horizon tasks. Moreover, the explanations derived from these world models significantly enhance users' understanding of the robot's decision-making process and of the system's capabilities and limitations.
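
To make the graph-based idea concrete, the following is a minimal sketch of how a world-model graph over landmark states could support both plan explanations and explanations of errors in a user's estimate: plan a route by search over the graph, and flag any transition the user assumes that the model deems unreachable. All identifiers here (`WorldModelGraph`, `explain`, the toy landmarks) are illustrative assumptions, not the paper's actual implementation.

```python
from collections import deque


class WorldModelGraph:
    """Directed graph over landmark states learned from experience."""

    def __init__(self):
        self.edges = {}  # node -> set of directly reachable nodes

    def add_transition(self, src, dst):
        self.edges.setdefault(src, set()).add(dst)
        self.edges.setdefault(dst, set())

    def shortest_path(self, start, goal):
        """BFS over landmark nodes; returns a node sequence or None."""
        if start == goal:
            return [start]
        parent = {start: None}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nxt in self.edges.get(node, ()):
                if nxt not in parent:
                    parent[nxt] = node
                    if nxt == goal:
                        # Reconstruct the path by walking parents back to start.
                        path = [goal]
                        while parent[path[-1]] is not None:
                            path.append(parent[path[-1]])
                        return path[::-1]
                    queue.append(nxt)
        return None


def explain(graph, current, goal, user_expected_path=None):
    """Explain the planned transition and flag errors in the user's estimate."""
    path = graph.shortest_path(current, goal)
    if path is None:
        return f"Goal {goal} is not reachable from {current} in the world model."
    lines = ["Planned route through landmark states: " + " -> ".join(path)]
    if user_expected_path:
        # Check each transition the user assumes against the model's edges.
        for a, b in zip(user_expected_path, user_expected_path[1:]):
            if b not in graph.edges.get(a, ()):
                lines.append(f"Your estimate assumes {a} -> {b}, but that "
                             f"transition is not reachable in the model.")
    return "\n".join(lines)


# Usage: a toy maze with landmark states A..D on a single corridor.
g = WorldModelGraph()
for s, d in [("A", "B"), ("B", "C"), ("C", "D")]:
    g.add_transition(s, d)
print(explain(g, "A", "D", user_expected_path=["A", "C", "D"]))
```

In this sketch the user's assumed shortcut A -> C is reported as unreachable, mirroring the paper's goal of explaining both the chosen plan and the gap between the user's estimate and the learned world model.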
