Abstract

This paper proposes iTD3-CLN, a Deep Reinforcement Learning (DRL)-based low-level motion controller for map-less autonomous navigation in dynamic scenes. We consider three enhancements to Twin Delayed DDPG (TD3) for the navigation task: N-step returns, Prioritized Experience Replay, and a channel-based Convolutional Laser Network (CLN) architecture. In contrast to conventional methods such as the Dynamic Window Approach (DWA), our approach is superior in several ways: it requires no prior knowledge of the environment or a metric map, relies less on an accurate sensor, learns intuitive emergent behavior in dynamic scenes, and, more remarkably, transfers to the real robot without further fine-tuning. Our extensive studies show that, compared to the original TD3, the proposed approach achieves approximately a 50% reduction in training to reach the same performance, a 50% higher accumulated reward, and a 30–50% increase in generalization performance when tested in unseen environments. Videos of our experiments are available at https://youtu.be/BRN0Gk5oLOc (simulation) and https://youtu.be/yIxGH9TPQCc (real-world experiment).
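Of the three enhancements named above, the N-step return is the simplest to state concretely: instead of bootstrapping the TD target from the very next state, the critic target accumulates n discounted rewards before bootstrapping from the critic's value n steps ahead. The sketch below is an illustrative helper, not code from the paper; the function name `n_step_target` and its arguments are our own, and the bootstrap value stands in for the target critic's estimate in TD3.

```python
def n_step_target(rewards, gamma, bootstrap_q, done):
    """Compute an n-step TD target:
        G = r_t + gamma*r_{t+1} + ... + gamma^{n-1}*r_{t+n-1}
            + gamma^n * Q(s_{t+n}, a_{t+n})   (if the episode did not end)

    rewards      : list of the n rewards collected from step t onward
    gamma        : discount factor in (0, 1]
    bootstrap_q  : critic's value estimate at state s_{t+n} (hypothetical stand-in
                   for TD3's target-network Q-value)
    done         : True if the episode terminated within these n steps
    """
    g = 0.0
    # Fold the rewards from the last one backwards, discounting as we go.
    for r in reversed(rewards):
        g = r + gamma * g
    if not done:
        # Bootstrap from the critic n steps ahead, discounted by gamma^n.
        g += (gamma ** len(rewards)) * bootstrap_q
    return g
```

With `rewards=[1.0, 2.0, 3.0]`, `gamma=0.9`, and `bootstrap_q=10.0`, this yields `1 + 0.9*2 + 0.81*3 + 0.729*10 = 12.52`. Larger n propagates reward information faster at the cost of higher variance, which is one plausible reason the paper observes faster training than vanilla (1-step) TD3.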
