Abstract

Real-world robot applications often require a navigating agent to reach multiple destinations. Moreover, crowded real-world environments typically contain both dynamic and static crowds that implicitly interact with each other during navigation. To address this challenging task, this paper develops a novel modular hierarchical reinforcement learning (MHRL) method. MHRL comprises three modules, i.e., destination evaluation, policy switch, and motion network, designed to match the three phases of solving the original navigation problem. First, the destination evaluation module rates all destinations and selects the one with the lowest cost. Next, the policy switch module decides which motion network to use according to the selected destination and the obstacle state. Finally, the selected motion network outputs the robot action. Owing to the complementary strengths of the various motion networks and the cooperation of the modules at each layer, MHRL handles hybrid crowds effectively. Extensive simulation experiments demonstrate that MHRL outperforms state-of-the-art methods.
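The three-phase decision flow described above (evaluate destinations, switch policy, act) can be sketched as follows. This is a minimal illustrative sketch only: all class names, function names, and the cost/obstacle-state representations are assumptions for exposition, not the authors' actual implementation, and the placeholder "motion networks" stand in for trained policies.

```python
# Hypothetical sketch of the three-phase MHRL decision loop; names and
# representations are illustrative assumptions, not the paper's code.
from dataclasses import dataclass
from typing import Callable, Dict, Sequence


@dataclass
class Destination:
    name: str
    cost: float  # lower is better (e.g., distance plus predicted crowd risk)


def evaluate_destinations(destinations: Sequence[Destination]) -> Destination:
    """Phase 1: rate all destinations and select the lowest-cost one."""
    return min(destinations, key=lambda d: d.cost)


def switch_policy(destination: Destination, obstacle_state: str,
                  motion_networks: Dict[str, Callable]) -> Callable:
    """Phase 2: choose a motion network from the selected destination
    and the current obstacle state (here a simple string label)."""
    key = "dynamic" if obstacle_state == "dynamic_crowd" else "static"
    return motion_networks[key]


def mhrl_step(destinations, obstacle_state, motion_networks, robot_obs):
    """One hierarchical decision: evaluate -> switch -> act (Phase 3)."""
    dest = evaluate_destinations(destinations)
    policy = switch_policy(dest, obstacle_state, motion_networks)
    return dest, policy(robot_obs, dest)


# Placeholder motion networks; a real system would use trained RL policies
# specialized for static vs. dynamic crowds.
networks = {
    "static": lambda obs, d: f"slow approach toward {d.name}",
    "dynamic": lambda obs, d: f"evasive approach toward {d.name}",
}
dests = [Destination("door_A", 3.2), Destination("door_B", 1.7)]
chosen, action = mhrl_step(dests, "dynamic_crowd", networks, robot_obs=None)
```

In this toy run, the evaluator selects the cheaper destination and the dynamic-crowd network is dispatched; the hierarchy's value is that each layer can be improved or retrained independently.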

