Abstract

In this paper, a visual simultaneous localization and mapping (SLAM) system is proposed in which both points and lines are extracted as features and a deep neural network is adopted for loop detection. Its working principles are set forth in detail, including the representation, extraction, description, and matching of line features; initialization; keyframe selection; optimization of tracking and mapping; and loop detection with a deep neural network. The overall trajectory-estimation and loop-detection performance is evaluated on the TUM RGB‐D (indoor) benchmark and the KITTI (outdoor) dataset. The experimental results indicate that, compared with conventional SLAM systems, the proposed system improves the accuracy and robustness of trajectory estimation, especially in scenes with insufficient point features. For loop detection, the deep neural network proves superior to the traditional bag‐of‐words model, as it reduces the accumulated errors in both the estimated trajectory and the reconstructed scenes.
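The loop-detection idea described in the abstract can be illustrated with a minimal sketch: each keyframe is summarized by a global descriptor produced by a deep network, and a loop-closure candidate is declared when the current frame's descriptor is sufficiently similar to that of a past keyframe. The function names, the cosine-similarity measure, and the threshold below are illustrative assumptions for exposition, not the paper's actual network or matching pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two global image descriptors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect_loop_candidates(query_desc, keyframe_descs, threshold=0.9):
    """Return indices of past keyframes whose descriptor is similar
    enough to the query frame's to be treated as loop candidates.
    (Threshold and similarity measure are assumptions, not the paper's.)"""
    return [i for i, d in enumerate(keyframe_descs)
            if cosine_similarity(query_desc, d) >= threshold]

# Toy usage: descriptors would normally come from a deep network.
query = np.array([0.9, 0.1, 0.0])
keyframes = [np.array([1.0, 0.0, 0.0]),   # similar viewpoint
             np.array([0.0, 0.0, 1.0])]   # unrelated scene
print(detect_loop_candidates(query, keyframes))  # → [0]
```

Candidates flagged this way would still be geometrically verified (e.g., by feature matching and pose estimation) before a loop closure is accepted, since appearance similarity alone can produce false positives.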

