Abstract
In the study of RGB-D SLAM (Simultaneous Localization and Mapping), two primary types of visual features, points and lines, have been widely used to estimate the camera pose. Although an RGB-D camera captures RGB and depth information simultaneously, most RGB-D SLAM methods exploit only the 2D information of point and line features. To estimate the camera pose more accurately and make better use of both the 2D and 3D information of points and lines, this paper proposes a novel geometric constraint model of points and lines (PL-GM) for an RGB-D camera. Our contributions are threefold. First, the 3D points and lines recovered from the RGB-D camera are combined with the 2D point and line features to establish the PL-GM, which differs from most point-line SLAM (PL-SLAM) models. Second, in addition to the 2D re-projection errors of point and line features, constraint errors on the 3D points and lines are constructed and minimized as well, and the resulting unified optimization model based on PL-GM is incorporated into bundle adjustment (BA). Finally, extensive experiments on two public benchmark RGB-D datasets and a real-world sequence demonstrate that our method achieves performance comparable to or better than state-of-the-art SLAM methods based on point and line features, as well as those based on point features alone.
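The abstract does not give the exact form of the unified optimization; purely as an illustrative sketch (the residual definitions, symbols, and weighting below are assumptions, not the paper's formulation), a joint objective combining 2D re-projection terms with 3D constraint terms could take the form

E(\{T_k\},\{X_i\},\{L_j\}) = \sum_{k,i} \rho\big(\lVert \pi(T_k X_i) - x_{k,i} \rVert^2_{\Sigma_p}\big) + \sum_{k,j} \rho\big(d_{2D}(\pi_L(T_k L_j),\, l_{k,j})^2\big) + \sum_{k,i} \rho\big(\lVert T_k X_i - \tilde{X}_{k,i} \rVert^2_{\Sigma_X}\big) + \sum_{k,j} \rho\big(d_{3D}(T_k L_j,\, \tilde{L}_{k,j})^2\big),

where T_k are camera poses, X_i and L_j are 3D point and line landmarks, \pi and \pi_L project points and lines into the image, x_{k,i} and l_{k,j} are their 2D observations, \tilde{X}_{k,i} and \tilde{L}_{k,j} denote 3D measurements back-projected from the RGB-D depth, d_{2D} and d_{3D} are 2D and 3D point-to-line distances, and \rho is a robust kernel. In this reading, the first two sums correspond to the 2D re-projection errors and the last two to the 3D constraint errors, all minimized jointly within BA as the abstract describes.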