Abstract

In the realm of LiDAR-based place recognition, three predominant methodologies have emerged: methods based on manually crafted feature descriptors, deep learning-based methods, and hybrid methods that combine the two. Manually crafted feature descriptors often falter on reverse visits and in confined indoor environments, while deep learning-based methods generalize poorly to unseen data domains. Hybrid methods tend to mitigate both problems, albeit at the cost of a heavy computational burden. In response, this paper introduces MixedSCNet, a novel hybrid approach designed to harness the strengths of manually crafted feature descriptors and deep learning models while keeping computational overhead relatively low. MixedSCNet first constructs a BEV descriptor called MixedSC, which encodes height, intensity, and smoothness simultaneously, thus offering a more comprehensive representation of the point cloud. MixedSC is then fed into a compact Convolutional Neural Network (CNN), which extracts high-level features and ultimately yields a discriminative global point cloud descriptor. This descriptor is employed for place retrieval, effectively bridging the gap between manually crafted feature descriptors and deep learning models. To substantiate the efficacy of this combination, we conduct extensive experiments on the KITTI and NCLT datasets. Results show that MixedSCNet is the only method achieving state-of-the-art performance on both datasets, outperforming the five competing methods while maintaining a relatively short runtime.
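
The abstract outlines a two-stage pipeline: rasterize the point cloud into a multi-channel BEV descriptor, then distill it into a global descriptor with a compact CNN. The sketch below illustrates that flow in PyTorch under stated assumptions; the abstract gives no implementation details, so the Cartesian grid (the actual MixedSC layout may well be polar, in the Scan Context style), the use of per-cell height variance as a stand-in for the smoothness channel, and the names build_mixed_bev and CompactCNN with all layer sizes are hypothetical, not the authors' MixedSC or network.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_mixed_bev(points: np.ndarray, grid: int = 64, max_range: float = 50.0) -> torch.Tensor:
    """Rasterize an (N, 4) cloud [x, y, z, intensity] into a 3-channel BEV
    image: per-cell max height, mean intensity, and height variance (the
    last is only a stand-in for the paper's smoothness channel)."""
    keep = (np.abs(points[:, 0]) < max_range) & (np.abs(points[:, 1]) < max_range)
    pts = points[keep]
    cells = ((pts[:, :2] + max_range) / (2 * max_range) * (grid - 1)).astype(int)
    h = np.full((grid, grid), -np.inf, np.float32)  # running max height
    s_i = np.zeros((grid, grid), np.float32)        # intensity sum
    s_z = np.zeros((grid, grid), np.float32)        # height sum
    s_z2 = np.zeros((grid, grid), np.float32)       # squared-height sum
    n = np.zeros((grid, grid), np.float32)          # points per cell
    for (cx, cy), (_, _, z, ity) in zip(cells, pts):
        h[cy, cx] = max(h[cy, cx], z)
        s_i[cy, cx] += ity
        s_z[cy, cx] += z
        s_z2[cy, cx] += z * z
        n[cy, cx] += 1
    occ = n > 0
    bev = np.zeros((3, grid, grid), np.float32)
    bev[0][occ] = h[occ]                     # channel 0: max height
    bev[1][occ] = s_i[occ] / n[occ]          # channel 1: mean intensity
    bev[2][occ] = np.maximum(                # channel 2: height variance
        s_z2[occ] / n[occ] - (s_z[occ] / n[occ]) ** 2, 0.0)
    return torch.from_numpy(bev)


class CompactCNN(nn.Module):
    """Small CNN mapping the 3-channel BEV to an L2-normalized global
    descriptor; layer widths and descriptor size are illustrative."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)


# Toy point cloud: x, y in [-40, 40), z in [0, 4), intensity in [0, 1).
cloud = (np.random.rand(2048, 4) * np.array([80, 80, 4, 1])
         - np.array([40, 40, 0, 0])).astype(np.float32)
model = CompactCNN().eval()
with torch.no_grad():
    query = model(build_mixed_bev(cloud).unsqueeze(0))  # shape (1, 256)
```

In a full system, descriptors computed this way would be indexed (e.g., in a k-d tree) so that place retrieval reduces to a nearest-neighbor query over previously visited places.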
