Abstract

Autonomous exploration is a key capability for robotic intelligence, as it enables unsupervised preparation for future tasks and flexible deployment. In this paper, a novel Deep Reinforcement Learning (DRL) based autonomous exploration strategy is proposed to efficiently reduce the unknown area of the workspace and provide accurate 2D map construction for mobile robots. Unlike existing hand-designed exploration techniques, which usually make strong assumptions about the scenario and the task, we use a model-free method to directly learn an exploration strategy through trial-and-error interactions with complex environments. Specifically, the Generalized Voronoi Diagram (GVD) is first used for domain conversion to obtain a high-dimensional Topological Environmental Representation (TER). Then, a Generalized Voronoi Network (GVN) with spatial awareness and episodic memory is designed to learn autonomous exploration policies interactively online. For complete and efficient exploration, Invalid Action Masking (IAM) is employed to reshape the configuration space of the exploration task, coping with the explosion of the action and observation spaces as the exploration range expands. Furthermore, a carefully designed reward function guides policy learning. Extensive baseline tests and comparative simulations show that our strategy outperforms state-of-the-art strategies in terms of map quality and exploration speed. Ablation studies and mobile-robot experiments further demonstrate the effectiveness and superiority of our strategy.
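The abstract gives no implementation details for IAM, but the general technique is standard in discrete-action DRL. Below is a minimal sketch of how invalid action masking is commonly realized, assuming a PyTorch-style categorical policy over candidate exploration targets (e.g., GVD nodes); the function name `masked_policy` and the toy mask are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def masked_policy(logits: torch.Tensor, valid_mask: torch.Tensor) -> torch.Tensor:
    """Suppress invalid actions before the softmax so they receive
    (numerically) zero probability and are never sampled."""
    masked_logits = logits.masked_fill(~valid_mask, float("-inf"))
    return F.softmax(masked_logits, dim=-1)

# Toy example: 6 candidate target nodes, only nodes 0, 2, and 5 currently valid
logits = torch.randn(6)                     # raw policy-network outputs
valid_mask = torch.tensor([True, False, True, False, False, True])
probs = masked_policy(logits, valid_mask)   # invalid entries get probability 0
action = torch.multinomial(probs, 1).item()  # samples only among valid nodes
```

Masking at the logit level (rather than renormalizing sampled probabilities) keeps the categorical distribution well defined for policy-gradient training, since invalid actions contribute exactly zero probability to sampled trajectories.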
