Abstract

A novel semantic loop closure detection (SLCD) method is proposed in this article for visual simultaneous localization and mapping (SLAM) systems. SLCD aims to alleviate the instance-level semantic inconsistency problem that arises in dynamic industrial scenes (e.g., autonomous driving in large cities). As a first step in this direction, SLCD fully exploits both low- and high-level information in video frames in a coarse-to-fine manner. In SLCD, we adopt a convolutional neural network based object detector to acquire object information from consecutive frames. Meanwhile, we perform a bag-of-visual-words based similarity calculation to narrow the frames down to coarse loop closure candidates. For these candidates, we perform object matching to identify semantic inconsistency cases and remove the involved inconsistencies according to their case type. Then, we recalculate the similarity scores for these candidates. Finally, loop closures are determined from the recalculated similarity scores together with a geometric verification. The favorable performance of the proposed method is demonstrated by comparing it with other state-of-the-art methods on several public datasets and on our new Dynamic Scenes dataset.
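
A minimal sketch of the coarse-to-fine pipeline described above is given below, in Python. It is illustrative only: the helper names, thresholds, and the word-to-object mapping are assumptions for exposition, not the authors' implementation, and the geometric verification step is only indicated by a comment.

import numpy as np

def bovw_similarity(hist_a, hist_b):
    # Cosine similarity between two bag-of-visual-words histograms.
    denom = np.linalg.norm(hist_a) * np.linalg.norm(hist_b) + 1e-12
    return float(np.dot(hist_a, hist_b)) / denom

def coarse_candidates(query_hist, keyframe_hists, top_k=10):
    # Coarse step: rank keyframes by BoVW similarity to the query frame.
    scores = [(i, bovw_similarity(query_hist, h)) for i, h in enumerate(keyframe_hists)]
    scores.sort(key=lambda t: t[1], reverse=True)
    return scores[:top_k]

def remove_inconsistent_words(query_hist, cand_hist, query_objects, cand_objects, word_to_object):
    # Fine step (assumed proxy for the instance-level inconsistency handling):
    # drop visual words that lie on object classes present in only one of the two frames.
    inconsistent = set(query_objects) ^ set(cand_objects)
    keep = np.ones_like(query_hist, dtype=bool)
    for word_id, obj_label in word_to_object.items():
        if obj_label in inconsistent:
            keep[word_id] = False
    return query_hist * keep, cand_hist * keep

def detect_loops(query_hist, keyframe_hists, query_objects, keyframe_objects,
                 word_to_object, accept_threshold=0.6):
    # Full pipeline: coarse candidates -> inconsistency removal -> re-score -> accept.
    loops = []
    for idx, _ in coarse_candidates(query_hist, keyframe_hists):
        qh, ch = remove_inconsistent_words(query_hist, keyframe_hists[idx],
                                           query_objects, keyframe_objects[idx],
                                           word_to_object)
        score = bovw_similarity(qh, ch)
        if score >= accept_threshold:
            # A real system would additionally apply geometric verification
            # (e.g., RANSAC over matched features) before accepting the loop.
            loops.append((idx, score))
    return loops

In this sketch the "semantic inconsistency" removal is reduced to masking visual words associated with unmatched object classes; the actual method distinguishes several inconsistency cases and handles each accordingly.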
