Abstract
Loop closure detection is a crucial problem in simultaneous localization and mapping (SLAM) for autonomous driving and robotics. In large-scale, complex outdoor environments, existing LiDAR-based methods still suffer from viewpoint changes, condition changes, and perceptual aliasing. To address these drawbacks, this article develops a novel LiDAR-based multimodule cascaded Siamese convolutional neural network, named MMCS-Net, which simulates the human-eye mechanism to extract more discriminative and generic feature descriptors. MMCS-Net comprises three complementary modules: a cascaded-attention Siamese fully convolutional (CA_SFC) module, a rotation-invariant and topological feature enhancement (RT_E) module, and a feature uniqueness enhancement and aggregation compression (UE_AC) module. In particular, the graph structure employed in RT_E explicitly encodes the local topological correlations of point clouds from intensity and geometric cues in parallel. Extensive comparative experiments on the KITTI, NCLT, LGSVL, and real-vehicle datasets show that the proposed method outperforms state-of-the-art methods and remains highly robust while meeting the real-time requirements of resource-constrained robots.
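To illustrate the Siamese idea the abstract relies on, the toy sketch below shows how two LiDAR scans can be mapped through a *shared-weight* branch into fixed-size global descriptors whose similarity scores a candidate loop closure. This is a minimal NumPy stand-in, not the MMCS-Net architecture: the single linear layer, the max-pooling aggregation, and all names (`embed`, `siamese_similarity`, the 4-D per-point features) are illustrative assumptions.

```python
import numpy as np

def embed(point_cloud: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Toy stand-in for one descriptor branch: project per-point
    features (x, y, z, intensity) through a linear layer + ReLU,
    then max-pool over points into a fixed-size global descriptor."""
    hidden = np.maximum(point_cloud @ weights, 0.0)  # (N, d_out)
    return hidden.max(axis=0)                        # (d_out,)

def siamese_similarity(cloud_a: np.ndarray, cloud_b: np.ndarray,
                       weights: np.ndarray) -> float:
    """Both branches share `weights` (the Siamese property); cosine
    similarity between the two descriptors scores the loop closure."""
    da, db = embed(cloud_a, weights), embed(cloud_b, weights)
    return float(da @ db /
                 (np.linalg.norm(da) * np.linalg.norm(db) + 1e-12))

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 16))          # hypothetical shared weights
scan = rng.standard_normal((100, 4))      # one query scan
revisit = scan + 0.01 * rng.standard_normal(scan.shape)  # same place, noisy
other = rng.standard_normal((100, 4))     # an unrelated place

# A revisit of the same place should score higher than a different place.
print(siamese_similarity(scan, revisit, w),
      siamese_similarity(scan, other, w))
```

In a trained network the linear layer would be replaced by convolutional feature extraction and the weights learned so that descriptors of the same place are close under viewpoint and condition changes, which is the property the comparative experiments above evaluate.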