Abstract

Visual navigation in unknown environments poses significant challenges due to cluttered obstacles and low-texture scenes, which can cause frequent collisions and tracking failures in feature-based visual Simultaneous Localization and Mapping (vSLAM). To address these issues, this paper proposes a spatial memory-augmented visual navigation system that combines a vSLAM module, a conventional global planner module, and a Hierarchical Reinforcement Learning (HRL)-based local planner module. First, a real-time vSLAM system named Salient-SLAM is proposed to improve visual navigation performance. Salient-SLAM adds a navigation-mapping thread that incorporates a saliency prediction model to build a navigation map categorizing environmental regions as occupied, explored, or noticeable. Encoding these navigation maps yields a spatial memory that captures both a spatial abstraction of the environment and its saliency information, helping the agent determine an optimal path to its destination. An open-source saliency dataset, constructed by mimicking the visual attention mechanism, is released to train the saliency prediction model. Second, an HRL method is proposed that automatically decomposes local planning into a high-level policy selector and several low-level policies, where the latter produce actions to interact with the environment. In learning the low-level policies, we maximize entropy and minimize option correlation to acquire diverse and independent behaviors. Simulation results show that the proposed HRL method outperforms competitive baselines by 6.29–10.85% in Success Rate (SR) and 3.87–11.1% in Success weighted by Path Length (SPL). Incorporating the spatial memory further improves SR and SPL by an average of 9.85% and 10.89%, respectively.
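
To make the two mechanisms above concrete, the sketches below illustrate them in Python. All class, function, and parameter names (nav_map, TwoLevelPolicy, diversity_bonus, beta, lam, and so on) are hypothetical stand-ins, not the paper's API; these are minimal sketches under those assumptions, not the authors' implementation.

The first sketch represents the navigation map as a three-channel grid (occupied / explored / noticeable) and pools it into a crude spatial-memory vector; the learned encoding described in the abstract is presumably far richer.

```python
import numpy as np

# Hypothetical 3-channel navigation map: channel 0 = occupied,
# channel 1 = explored, channel 2 = noticeable (salient) regions.
H = W = 64
nav_map = np.zeros((3, H, W), dtype=np.float32)
nav_map[0, 30:34, 10:20] = 1.0   # an obstacle marked as occupied
nav_map[1, :32, :] = 1.0         # the upper half marked as explored
nav_map[2, 12, 40] = 1.0         # a salient landmark marked as noticeable

# Toy spatial memory: average-pool each channel into a compact vector
# (a stand-in for the learned map encoder the abstract describes).
spatial_memory = nav_map.reshape(3, -1).mean(axis=1)
print(spatial_memory)            # fraction of occupied/explored/noticeable cells
```

The second sketch shows the two-level control loop and the diversity objective: a high-level selector samples one of K low-level option policies, the chosen option emits a primitive action, and a regularizer rewards per-option entropy while penalizing correlation between the options' action distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

class TwoLevelPolicy:
    """Hypothetical two-level policy: a selector over K low-level options."""

    def __init__(self, obs_dim, num_options, num_actions):
        # Random linear score heads stand in for the learned networks.
        self.W_select = rng.normal(0.0, 0.1, (num_options, obs_dim))
        self.W_option = rng.normal(0.0, 0.1, (num_options, num_actions, obs_dim))

    def act(self, obs):
        # High level: sample an option from the selector's distribution.
        option_probs = softmax(self.W_select @ obs)
        k = rng.choice(len(option_probs), p=option_probs)
        # Low level: the chosen option produces the primitive action.
        action_probs = softmax(self.W_option[k] @ obs)
        a = rng.choice(len(action_probs), p=action_probs)
        return k, a

def diversity_bonus(option_action_probs, beta=0.1, lam=0.1):
    """Entropy bonus per option minus an option-correlation penalty.

    option_action_probs: (K, A) array holding every option's action
    distribution at the same state.
    """
    p = np.asarray(option_action_probs)
    entropy = -(p * np.log(p + 1e-8)).sum(axis=1).mean()   # maximize entropy
    corr = np.corrcoef(p)                                  # K x K similarity
    off_diag = corr[~np.eye(len(p), dtype=bool)].mean()    # mean pairwise corr
    return beta * entropy - lam * off_diag                 # add to the RL objective

policy = TwoLevelPolicy(obs_dim=16, num_options=4, num_actions=3)
obs = rng.normal(size=16)
k, a = policy.act(obs)
probs = np.stack([softmax(policy.W_option[j] @ obs) for j in range(4)])
print(f"option={k}, action={a}, bonus={diversity_bonus(probs):.4f}")
```

The weights beta and lam trade off the two terms: a larger beta keeps each option stochastic, while a larger lam pushes the options' action distributions apart, matching the abstract's goal of diverse and independent low-level behaviors.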
