Abstract. Visual simultaneous localization and mapping (VSLAM) technology provides a theoretical basis for the operation of unmanned equipment, such as autonomous vehicles and sweeping robots, in unfamiliar environments. Although traditional VSLAM systems have achieved great success after long-term development, they still struggle to maintain good performance in challenging environments. Deep learning, a rapidly developing technology in the field of computer vision, has shown outstanding advantages in image processing, and combining it with VSLAM has become a hot research topic. By improving the performance of traditional VSLAM in depth estimation, pose estimation, and loop closure detection, deep learning can help compensate for the lack of scale information and for the difficulties posed by dynamic environments; it can not only reduce the size of the network model but also improve the accuracy of trajectory estimation. Specifically, regarding the fusion of the VSLAM pipeline with deep learning, many researchers have proposed fusion methods based on visual odometry, loop closure detection, and mapping. This work surveys the trends in combining VSLAM with deep learning algorithms, hoping to support the true autonomy of future mobile robots, and concludes with prospects for the development of VSLAM.