Abstract

Simultaneous Localization and Mapping (SLAM) is the basis for intelligent mobile robots operating in unknown environments. However, the hand-crafted feature extraction algorithms that traditional visual SLAM systems rely on struggle with texture-less regions and other complex scenes, which limits the development of visual SLAM. Studies of feature point extraction using deep learning show that such methods handle complex scenes better than traditional ones, but they prioritize accuracy while ignoring efficiency. To address these problems, this paper proposes a real-time deep-learning visual SLAM system based on a multi-task feature extraction network and self-supervised feature points. By designing a simplified Convolutional Neural Network (CNN) that detects feature points and computes descriptors to replace the traditional feature extractor, the accuracy and stability of the visual SLAM system are enhanced. Experimental results on a dataset and in real environments show that the proposed system maintains high accuracy in a variety of challenging scenes, runs in real time on a GPU, and supports the construction of dense 3D maps. Moreover, its overall performance is better than that of current traditional visual SLAM systems.
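The abstract describes replacing a hand-crafted feature extractor with a CNN that outputs feature points. A common post-processing step in such learned detectors is turning the network's dense score map into discrete keypoints via thresholding and non-maximum suppression; a minimal NumPy sketch of that step is shown below. This is an illustrative reconstruction under generic assumptions (the function name, threshold, and NMS radius are hypothetical), not the paper's actual pipeline.

```python
import numpy as np

def extract_keypoints(heatmap, threshold=0.5, nms_radius=1):
    """Pick local maxima above `threshold` from a detector score map.

    Dense non-maximum suppression: a pixel is kept only if it is the
    maximum within its (2*nms_radius+1)^2 neighbourhood.
    Returns a list of (x, y, score) tuples.
    """
    H, W = heatmap.shape
    pad = nms_radius
    # Pad with -inf so border pixels compare only against real neighbours.
    padded = np.pad(heatmap, pad, mode="constant", constant_values=-np.inf)
    keypoints = []
    for y in range(H):
        for x in range(W):
            score = heatmap[y, x]
            if score < threshold:
                continue
            window = padded[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
            if score >= window.max():  # local maximum in its window
                keypoints.append((x, y, float(score)))
    return keypoints

# Example: two peaks, one weaker response suppressed by a stronger neighbour.
hm = np.zeros((8, 8))
hm[2, 3] = 0.9   # strong peak
hm[2, 4] = 0.6   # adjacent weaker response, suppressed by (3, 2)
hm[6, 6] = 0.7   # isolated peak
print(extract_keypoints(hm))  # → [(3, 2, 0.9), (6, 6, 0.7)]
```

In a full system such as the one the abstract outlines, the surviving keypoints would then be paired with the descriptor head's output at the same locations for matching and pose estimation.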

