Abstract
Mobile robots operating in public environments must navigate among humans and obstacles in a socially compliant and safe manner. Prior work has demonstrated the effectiveness of deep reinforcement learning (DRL) for training efficient robot navigation policies. However, most DRL-based navigation methods consider only dynamic pedestrians and ignore static obstacles. Treating pedestrians and static obstacles differently can improve a robot's navigation efficiency. In this work, we propose a novel network, the obstacle-robot uni-action (ORU) network, to encode the one-way direct effects of obstacles on the robot. The obstacles' indirect effects on the robot, represented by the obstacle-human uni-action (OHU), together with human-human interaction (HHI), are concatenated to a human-robot interaction (HRI) network to obtain features describing the crowd's effect on the robot. We also incorporate a variable reaching-goal reward and an approaching-goal reward into our model, which improve the method's performance in terms of navigation time. Experiments in both simulation and on real-world datasets demonstrate that the proposed method outperforms state-of-the-art methods.
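As a rough illustration of the reward shaping mentioned above, the sketch below combines a time-dependent (variable) reaching-goal term with a progress-based approaching-goal term. All function names, parameters, and weights are illustrative assumptions, not the paper's exact formulation.

```python
def shaped_reward(dist_to_goal, prev_dist_to_goal, nav_time, time_limit,
                  reached_goal, collided,
                  r_collision=-0.25, r_goal_max=1.0, w_approach=0.1):
    """Hedged sketch of the two reward terms; names and weights are assumed."""
    if collided:
        # Fixed penalty for colliding with a pedestrian or obstacle.
        return r_collision
    if reached_goal:
        # Variable reaching-goal reward: larger when the goal is reached sooner.
        return r_goal_max * (1.0 - nav_time / time_limit)
    # Approaching-goal reward: proportional to progress made toward the goal.
    return w_approach * (prev_dist_to_goal - dist_to_goal)


# Example: a step that moves the robot 0.2 m closer to the goal
# earns 0.1 * 0.2 = 0.02 under these illustrative weights.
```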