Abstract

Hand-crafted point descriptors have traditionally been used for visual loop closure detection. In low-textured environments, however, it is usually difficult to find enough point features and, hence, the performance of such algorithms degrades. In this context, this paper proposes a loop closure detection method that combines lines and learned points to work particularly well in scenarios where hand-crafted points fail. To index previous images, we adopt separate incremental binary Bag-of-Words (BoW) schemes for points and lines. Moreover, we adopt a binarization procedure for the feature descriptors so as to bring the advantages of learned features into a binary BoW model. Furthermore, image candidates from each BoW instance are merged using a novel query-adaptive late fusion approach. Finally, a spatial verification stage, which integrates appearance and geometry perspectives, enhances the global performance of the method. Our approach is validated on several public datasets, outperforming other state-of-the-art solutions in most cases, especially in low-textured scenarios.

Highlights

  • Simultaneous Localization and Mapping (SLAM) is a fundamental task in robotics that allows an agent to build a map of an unknown environment while, at the same time, tracking its position within this map (Cadena et al. 2016).

  • We conduct a set of experiments on several publicly available datasets to evaluate the performance of LiPo-LCD++.

  • This work has introduced LiPo-LCD++, an appearance-based loop closure detection (LCD) approach that combines point and line features to increase performance in low-textured environments.

Summary

Introduction

Simultaneous Localization and Mapping (SLAM) is a fundamental task in robotics that allows an agent to build a map of an unknown environment while, at the same time, tracking its position within this map (Cadena et al. 2016). Standard BoW schemes do not account for the spatial distribution of features in the image, which tends to reduce their accuracy under severe perceptual aliasing conditions (Galvez-López and Tardos 2012; Garcia-Fidalgo and Ortiz 2018). For this reason, a final spatial verification step is usually performed to check the geometric consistency of the resulting correspondences between the query and candidate images. After choosing an image as the final loop candidate, the loop closure is further validated by a spatial verification procedure that naturally integrates point and line features. Our approach, as shown in the experimental results section, outperforms other solutions in generic environments and exhibits a remarkable performance level in low-textured scenarios. The rest of the paper is organized as follows: Sect. 2 overviews the most important works in the field; Sect. 3 describes the proposed approach; Sect. 4 discusses a set of experimental results that evaluate LiPo-LCD++ performance; and Sect. 5 concludes the paper and suggests some future research lines.
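The idea behind spatial verification of tentative matches can be sketched with a simple local neighborhood consistency check: a match survives only if the neighbors of its query keypoint are matched to points that are also neighbors of its candidate keypoint. All names, parameters, and the exact consistency rule below are illustrative assumptions, not the paper's formulation.

```python
import math

def neighborhood_consistency(matches, query_pts, cand_pts, k=3, min_shared=2):
    """Filter tentative feature matches by local neighborhood consistency.

    matches: list of (i, j) index pairs into query_pts / cand_pts.
    query_pts, cand_pts: lists of (x, y) keypoint coordinates.
    k: number of spatial nearest neighbors to inspect (hypothetical parameter).
    min_shared: minimum neighbors whose own match must land in the
    candidate-side neighborhood for a match to be kept.
    """
    def knn(pts, idx, k):
        # Indices of the k spatially nearest keypoints to pts[idx].
        order = sorted(
            (i for i in range(len(pts)) if i != idx),
            key=lambda i: math.dist(pts[idx], pts[i]),
        )
        return set(order[:k])

    cand_of = dict(matches)  # query index -> matched candidate index
    kept = []
    for qi, ci in matches:
        q_nb = knn(query_pts, qi, k)
        c_nb = knn(cand_pts, ci, k)
        # Count query neighbors whose match also falls in the candidate neighborhood.
        shared = sum(1 for n in q_nb if cand_of.get(n) in c_nb)
        if shared >= min_shared:
            kept.append((qi, ci))
    return kept
```

A geometrically consistent match set (e.g., a pure translation between images) passes this filter unchanged, while matches whose surroundings disagree between the two images are discarded; the same principle extends to line features by checking endpoint or midpoint neighborhoods.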

Related work
Proposed approach
Image description
Point description
Line description
Searching for loop closure candidates
Overview
Computation of feature relevance
Combination of scores
Dynamic islands computation for loop candidates filtering
Spatial verification
Feature matching
Local neighborhood consistency assessment
Experimental results
Methodology
Effectiveness of the binary descriptors
LCD performance breakdown
LCD accuracy evaluation
Computational times
Evolution of the vocabulary size
Comparison with other solutions
Conclusions and future work
