Abstract

Visual simultaneous localization and mapping (visual-SLAM) is a prominent technology for autonomous navigation of mobile robots. As a key requirement of visual-SLAM, loop closure detection (LCD) recognizes revisited places, helping visual-SLAM eliminate accumulated errors and obtain consistent maps. Conventional LCD approaches rely mainly on point features to detect loops, and their performance degrades in challenging environments, especially those with low texture or perceptual aliasing. This paper presents a novel point- and line-based LCD method that remains robust in environments where point features are scarce and false-positive loops are easily detected. First, point and line features are extracted to construct two visual vocabularies (a point-based vocabulary and a line-based vocabulary) using the bag-of-visual-words model. Second, a novel weighting scheme based on information entropy combines the point-based and line-based similarity scores to improve the accuracy of similarity evaluation between two images. Finally, a feature-matching coherence check determines the loop-closure candidate search area, avoiding false-positive loops caused by the robot slowing down or remaining stationary. We compare the proposed method with point-based, line-based, and PL-based (combining points and lines by feature number and dispersion) methods on public data sets in terms of precision and recall. The results show that the proposed method performs favorably compared with the other approaches.
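The abstract does not give the exact entropy-based weighting formula. As a rough illustration only, the sketch below shows one plausible fusion scheme: each feature type's similarity scores over the database are normalized into a distribution, and the distribution with lower entropy (i.e., more peaked, hence more discriminative) receives a larger weight. The function name `fuse_scores` and the inverse-entropy weighting are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (zero bins ignored)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def fuse_scores(point_scores, line_scores):
    """Combine point- and line-based similarity scores over database images.

    Assumed scheme: normalize each score vector into a distribution,
    then weight each modality by its inverse entropy, so a peaked
    (discriminative) score distribution contributes more.
    """
    p = np.asarray(point_scores, dtype=float)
    q = np.asarray(line_scores, dtype=float)
    hp = entropy(p / p.sum())
    hq = entropy(q / q.sum())
    # Lower entropy => more discriminative => larger weight (assumption).
    wp, wq = 1.0 / hp, 1.0 / hq
    total = wp + wq
    wp, wq = wp / total, wq / total
    return wp * p + wq * q
```

For example, if the point-based scores are sharply peaked at one database image while the line-based scores are nearly uniform, the fused score is dominated by the point-based modality, which matches the intuition that the more discriminative feature type should drive the similarity evaluation.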
