Abstract

In this article, we propose a visual simultaneous localization and mapping (SLAM) method that predicts and updates line flows, which represent sequential 2-D projections of 3-D line segments. While feature-based SLAM methods have achieved excellent results, they still struggle in challenging scenes with occlusions, image blur, and repetitive textures. To address these problems, we leverage a line flow to encode the coherence of line segment observations of the same 3-D line along the temporal dimension, a cue that has been neglected in prior SLAM systems. Thanks to this line flow representation, line segments in a new frame can be predicted from their corresponding 3-D lines and their predecessors along the temporal dimension. We create, update, merge, and discard line flows on the fly, and we model the proposed line-flow-based SLAM (LF-SLAM) with a Bayesian network. Extensive experimental results demonstrate that LF-SLAM achieves state-of-the-art performance owing to the line flow representation. In particular, LF-SLAM obtains good localization and mapping results in challenging scenes with occlusions, image blur, and repetitive textures.
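
The abstract only outlines the predict/update/merge/discard lifecycle of a line flow. As a rough illustration of that idea, the Python sketch below tracks line flows with a toy predict-and-match loop. All names (`LineFlow`, `track_line_flows`, `match_thresh`) and the constant-velocity predictor are assumptions made here for illustration; the paper's actual method predicts segments from the reconstructed 3-D lines and formulates the system as a Bayesian network, which this snippet does not attempt to reproduce.

```python
import numpy as np

class LineFlow:
    """A sequence of 2-D line segments hypothesized to observe the same 3-D line."""

    def __init__(self, first_segment, frame_id):
        # Each segment is a pair of 2-D endpoints, shape (2, 2).
        self.segments = [np.asarray(first_segment, dtype=float)]
        self.frame_ids = [frame_id]
        self.missed = 0  # consecutive frames without a matched detection

    def predict(self):
        """Constant-velocity guess for the next segment (a stand-in for
        prediction from the 3-D line and the temporal predecessor)."""
        if len(self.segments) >= 2:
            return self.segments[-1] + (self.segments[-1] - self.segments[-2])
        return self.segments[-1]

    def update(self, detected_segment, frame_id):
        """Extend the flow with a matched detection from the new frame."""
        self.segments.append(np.asarray(detected_segment, dtype=float))
        self.frame_ids.append(frame_id)
        self.missed = 0


def segment_distance(a, b):
    """Mean endpoint distance between two segments (illustrative metric)."""
    return float(np.mean(np.linalg.norm(a - b, axis=1)))


def track_line_flows(flows, detections, frame_id, match_thresh=10.0, max_missed=3):
    """One frame of tracking: update matched flows, create flows for
    unmatched detections, and discard flows unobserved for too long."""
    unmatched = list(range(len(detections)))
    for flow in flows:
        pred = flow.predict()
        best, best_d = None, match_thresh
        for i in unmatched:
            d = segment_distance(pred, np.asarray(detections[i], dtype=float))
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            flow.update(detections[best], frame_id)
            unmatched.remove(best)
        else:
            flow.missed += 1
    flows.extend(LineFlow(detections[i], frame_id) for i in unmatched)
    return [f for f in flows if f.missed <= max_missed]
```

In use, `track_line_flows` would be called once per frame with the line segments returned by a 2-D line detector, so each surviving `LineFlow` accumulates the temporally coherent observations that the abstract describes.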
