Abstract

Place recognition retrieves scenes from a prior map to identify previously visited locations. It is essential for long-term navigation and global localization in autonomous driving and robotics. Although semantic graphs have enabled great progress in 3D place recognition, their construction is strongly affected by complex dynamic environments. In response, this paper proposes a scene graph transformation network for place recognition in dynamic environments. To remove the interference of moving objects, we introduce a moving object segmentation (MOS) module. We also design a scene graph transformer-attention module that generates a more discriminative and representative global scene graph descriptor, significantly improving place-recognition performance. In addition, we integrate our place recognition method, as a loop-closure component, into an existing LiDAR-based odometry pipeline, boosting its localization accuracy. We evaluate our method on the KITTI and Oxford RobotCar datasets. Extensive experimental results show that our method effectively accomplishes place recognition, improving accuracy and robustness by at least 3% over existing state-of-the-art methods such as DiSCO. To demonstrate the generalization capability of our method, we evaluate it on the KITTI-360 and NCLT datasets while training only on KITTI. The experiments show that our scene graph descriptor achieves accurate loop closure and global localization in previously unseen environments.
