Abstract
Visual scene understanding and place recognition are among the most challenging problems that mobile robots must solve to achieve autonomous navigation. To reduce the high computational complexity of global optimal search strategies, this paper develops a new two-stage loop closure detection (LCD) strategy. The front-end sequence node-level matching (FSNLM) algorithm exploits the local continuity constraint of the motion process to avoid a blind search for the globally optimal match: it matches image nodes through a sliding window to accurately locate the locally optimal candidate node sets. The back-end image-level matching (BILM) algorithm, combined with an improved semantic model, DeepLab_AE, uses a convolutional neural network (CNN) as a feature detector to extract visual descriptors, replacing traditional hand-crafted feature detectors that do not generalize to all environments. Finally, the performance of the two-stage LCD algorithm is evaluated on five public datasets and compared against other state-of-the-art algorithms; the evaluation results show that the proposed method compares favorably with them.
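To illustrate the front-end idea, the sketch below shows a generic sliding-window sequence matcher in the spirit of FSNLM: it scores aligned sub-sequences of CNN descriptors under a local-continuity assumption and keeps the top-scoring windows as candidate node sets. This is a minimal, hypothetical reconstruction, not the authors' implementation; the descriptor source (DeepLab_AE), the exact scoring function, and all names here (`cosine_similarity_matrix`, `sliding_window_candidates`, `window`, `top_k`) are assumptions for illustration only.

```python
import numpy as np

def cosine_similarity_matrix(query_desc, db_desc):
    """Pairwise cosine similarity between query and database descriptors.

    Both inputs are (num_images, dim) arrays of CNN feature vectors.
    """
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    d = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
    return q @ d.T

def sliding_window_candidates(sim, window=5, top_k=3):
    """Score aligned sub-sequences and return candidate node sets.

    For each query window start i, every database window start j is scored
    by the mean similarity along the diagonal sim[i..i+w, j..j+w], which
    encodes the local continuity constraint (consecutive query images must
    match consecutive database images). The top-k starts per query window
    are returned as loop-closure candidates for back-end verification.
    """
    n_q, n_db = sim.shape
    candidates = []
    for i in range(n_q - window + 1):
        scores = np.array([
            np.mean(sim[np.arange(i, i + window), np.arange(j, j + window)])
            for j in range(n_db - window + 1)
        ])
        best = np.argsort(scores)[::-1][:top_k]
        candidates.append(list(zip(best.tolist(), scores[best].tolist())))
    return candidates

# Hypothetical usage with random stand-ins for CNN descriptors.
rng = np.random.default_rng(0)
query = rng.standard_normal((20, 128))
database = rng.standard_normal((100, 128))
sim = cosine_similarity_matrix(query, database)
print(sliding_window_candidates(sim, window=5, top_k=3)[0])
```

In a full pipeline, each candidate node set returned here would be passed to the back-end image-level matcher for semantic verification before a loop closure is accepted.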