Abstract

Detecting true loop closures in Visual Simultaneous Localization and Mapping (vSLAM) helps in several ways: it enables re-localization, improves the accuracy of the map, and allows registration algorithms to obtain more accurate and consistent results. Loop closure detection is affected by many factors, including illumination conditions, seasonal changes, viewpoint differences, and mobile objects. This paper proposes a novel approach based on a super dictionary that, unlike a traditional Bag-of-Words (BoW) dictionary, uses the more advanced and abstract features of deep learning. The proposed approach does not need to generate a vocabulary, which makes it memory efficient: instead of representing each frame with a vector as long as the vocabulary, as in the traditional BoW approach, it stores a small number of exact features that occupy far less memory. Two deep neural networks are used together to speed up loop closure detection and to suppress the effect of mobile objects on it. We compare the results with the most popular Bag-of-Words methods, DBoW2 and DBoW3, and with the state-of-the-art iBoW-LCD on five publicly available datasets; the results show that the proposed method performs loop closure detection robustly and is eight times faster than state-of-the-art approaches of a similar kind.
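To make the memory argument in the abstract concrete, the following is a minimal illustrative sketch (not the authors' implementation) of the idea of storing one compact deep-feature vector per keyframe and detecting a loop closure by nearest-neighbor matching under cosine similarity. All names and the similarity threshold here are hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class FeatureStore:
    """Stores one compact deep-feature vector per keyframe.

    Unlike a BoW histogram (one entry per vocabulary word per frame),
    each frame costs only the feature dimension in memory, and no
    vocabulary needs to be generated or stored.
    """
    def __init__(self, threshold=0.9):
        self.features = []          # list of (frame_id, vector)
        self.threshold = threshold  # hypothetical similarity threshold

    def add(self, frame_id, vec):
        self.features.append((frame_id, np.asarray(vec, dtype=float)))

    def query(self, vec, exclude_recent=2):
        """Return the best-matching earlier frame id, or None.

        The most recent frames are excluded so that a frame does not
        trivially match its immediate predecessors.
        """
        vec = np.asarray(vec, dtype=float)
        candidates = self.features[:max(0, len(self.features) - exclude_recent)]
        best = max(candidates,
                   key=lambda fv: cosine_similarity(vec, fv[1]),
                   default=None)
        if best is not None and cosine_similarity(vec, best[1]) >= self.threshold:
            return best[0]
        return None
```

Because the per-frame cost is a single fixed-length vector, the store grows linearly in the number of keyframes rather than in the vocabulary size, which is the memory saving the abstract refers to.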
