Abstract

Loop closure detection is essential in visual simultaneous localization and mapping (SLAM) systems for recognizing previously visited scenes, reducing uncertainty in pose and map estimates. However, loop closure detection is highly challenging in real-world environments due to perceptual aliasing and scene variations caused by dynamic objects, viewpoint changes, and illumination variations. This paper introduces a novel plug-and-play model, LoopNet, which finds similarities between scenes by identifying key landmarks to focus on without being distracted by scene variations. The proposed multi-scale attention-based Siamese convolutional model learns feature embeddings that focus on the discriminative objects in the scene rather than on holistic features. We show that our method outperforms state-of-the-art approaches in both indoor and outdoor environments while remaining robust to scene variations and perceptual aliasing.
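The retrieval step the abstract describes — comparing learned embeddings of the current frame against embeddings of past frames to detect a revisited place — can be sketched as follows. This is a minimal illustration, not the paper's method: the embedding vectors and the similarity threshold are hypothetical stand-ins for the outputs of a trained Siamese network.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_loop_closure(emb_query, emb_candidate, threshold=0.85):
    # Declare a loop closure when two frames' embeddings are
    # sufficiently similar (threshold value is illustrative).
    return cosine_similarity(emb_query, emb_candidate) >= threshold

# Toy embeddings standing in for network outputs (hypothetical):
e_query = [0.9, 0.1, 0.4]
e_same_place = [0.88, 0.12, 0.41]   # revisited scene, slight viewpoint change
e_other_place = [0.1, 0.9, 0.2]     # unrelated scene

print(is_loop_closure(e_query, e_same_place))   # similar embeddings
print(is_loop_closure(e_query, e_other_place))  # dissimilar embeddings
```

In a full pipeline, the query embedding would be matched against a database of all prior keyframe embeddings (e.g. via nearest-neighbor search) rather than a single candidate.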
