Abstract

Visual scene understanding and place recognition are among the most challenging problems that mobile robots must solve to achieve autonomous navigation. To reduce the high computational complexity of many globally optimal search strategies, this paper develops a new two-stage loop closure detection (LCD) strategy. The front-end sequence node level matching (FSNLM) algorithm exploits the local continuity constraint of the robot's motion: instead of blindly searching for the globally optimal match, it matches image nodes via a sliding window to accurately find locally optimal matching candidate node sets. The back-end image level matching (BILM) algorithm, combined with an improved semantic model, DeepLab_AE, uses a convolutional neural network (CNN) as a feature detector to extract visual descriptors, replacing traditional hand-crafted feature detectors that cannot generalize to all environments. Finally, the performance of the two-stage LCD algorithm is evaluated on five public datasets and compared with that of other state-of-the-art algorithms. The evaluation results show that the proposed method compares favorably with the alternatives.
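The abstract does not give implementation details, but the sliding-window idea behind FSNLM can be sketched minimally as follows. In this Python sketch, the function name, the window radius, the acceptance threshold, and the use of a global argmax to seed the first window are all illustrative assumptions, not details taken from the paper; the intent is only to show how a local continuity constraint replaces a global search over the similarity matrix.

import numpy as np

def sliding_window_match(similarity, window=10, threshold=0.7):
    # similarity: (Q, R) matrix of query-vs-reference image similarities.
    # For each query node, only a window of reference nodes around the
    # previous match is searched, exploiting the local continuity of the
    # robot's motion instead of a blind global search.
    Q, R = similarity.shape
    candidates = []
    center = int(np.argmax(similarity[0]))  # assumed: seed window with a one-off global guess
    for q in range(Q):
        lo = max(0, center - window)
        hi = min(R, center + window + 1)
        best = int(np.argmax(similarity[q, lo:hi])) + lo  # local optimum only
        if similarity[q, best] >= threshold:
            candidates.append((q, best))   # candidate pair for back-end (BILM) verification
        center = best  # continuity: the next search window follows this match
    return candidates

Each surviving (query, reference) pair would then be passed to an image-level verification stage; in the paper this is BILM with CNN-based descriptors from DeepLab_AE, which the sketch above does not attempt to reproduce.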
