Abstract

This work presents improvements to state-of-the-art algorithms for path planning and exploration of unknown, complex environments using Deep Reinforcement Learning. Our approach takes into consideration (i) map information, built online by the robot using a Simultaneous Localization and Mapping (SLAM) algorithm, and (ii) the uncertainty of the robot's pose, which leads to active loop closing that encourages exploration and better map generation; each signal drives one of two agents. The results show that the map-completeness-based reward function outperforms results from the literature by producing shorter trajectories, and thus better performance, while the uncertainty-based reward function with loop closing improves map generation. Both agents were able to perform Active SLAM in complex environments and generalized to unseen maps.
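The two reward signals mentioned above could be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the occupancy-grid encoding (-1 = unknown), the use of the covariance trace as the uncertainty measure, and the loop-closure bonus value are all assumptions made for the sake of the example.

```python
import numpy as np

def map_completeness_reward(prev_grid, curr_grid):
    """Reward proportional to newly observed cells between two SLAM updates.

    Grids are occupancy arrays where -1 marks unknown cells and any
    non-negative value marks an observed (free or occupied) cell.
    """
    newly_known = (np.count_nonzero(curr_grid != -1)
                   - np.count_nonzero(prev_grid != -1))
    return newly_known / curr_grid.size

def uncertainty_reward(prev_cov, curr_cov, loop_closed, loop_bonus=1.0):
    """Reward reduction in pose uncertainty, plus a bonus for closing a loop.

    Uncertainty is summarized here by the trace of the pose covariance;
    a loop closure (which typically shrinks the covariance) earns an
    extra fixed bonus to encourage active loop closing.
    """
    reward = np.trace(prev_cov) - np.trace(curr_cov)
    if loop_closed:
        reward += loop_bonus
    return reward

# Example: observing 2 new cells in a 4x4 map, then closing a loop.
prev = np.full((4, 4), -1)
curr = prev.copy()
curr[0, 0], curr[0, 1] = 0, 1
r_map = map_completeness_reward(prev, curr)          # 2 / 16 = 0.125
r_unc = uncertainty_reward(2 * np.eye(3), np.eye(3),
                           loop_closed=True)         # (6 - 3) + 1 = 4.0
```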
