Abstract

Most existing visual SLAM schemes rely solely on point or line features to estimate the camera trajectory. In scenes with missing texture or motion blur, it is difficult to find a sufficient number of reliable features, resulting in low positioning accuracy. To extract more features, an RPL-SLAM solution is proposed that extracts point features and line features respectively. Furthermore, the depth information of the RGB-D image is used to recover the 3D positions of the point and line features, improving the accuracy of camera trajectory estimation. The RPL-SLAM scheme consists of three main modules: tracking, local mapping, and loop detection. The tracking module extends the use of line features on top of point feature extraction and matching. An SLD line segment extraction algorithm that eliminates tiny segments and a DBM segment matching algorithm based on a bag-of-words model are proposed; together they improve matching efficiency while preserving matching accuracy, and effectively track and localize the camera in each frame. In the local mapping and loop detection modules, Plücker coordinates are used to represent spatial lines and to define the line reprojection error, so that the back-end optimization error model fusing points and lines is unified and the instability problem in optimization is resolved. RPL-SLAM is evaluated on the TUM RGB-D and ICL-NUIM datasets and compared with ORB-SLAM2. The results show that, by fusing point and line features with depth images, RPL-SLAM effectively improves the accuracy of pose estimation and map reconstruction while maintaining real-time performance.
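
As an illustration of the line parameterization mentioned above, the following is a minimal NumPy sketch of how a spatial line can be expressed in Plücker coordinates, transformed into the camera frame, projected into the image, and compared against a detected segment through a point-to-line reprojection error. The function names, the endpoint-distance error formulation, and the line projection matrix built from the pinhole intrinsics are standard choices from the point-line SLAM literature, not the paper's own implementation.

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def plucker_from_endpoints(p1, p2):
    """Plücker coordinates (n, d) of the 3D line through points p1 and p2:
    d is the direction vector, n = p1 x p2 is the moment vector."""
    d = p2 - p1
    n = np.cross(p1, p2)
    return n, d

def transform_plucker(n_w, d_w, R_cw, t_cw):
    """Map a Plücker line from the world frame into the camera frame."""
    n_c = R_cw @ n_w + skew(t_cw) @ (R_cw @ d_w)
    d_c = R_cw @ d_w
    return n_c, d_c

def project_line(n_c, K):
    """Project the camera-frame Plücker line onto the image plane.
    Only the moment vector n_c is needed; the line projection matrix is
    built from the pinhole intrinsics K."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    K_line = np.array([[fy,        0.0,      0.0],
                       [0.0,       fx,       0.0],
                       [-fy * cx, -fx * cy,  fx * fy]])
    return K_line @ n_c          # homogeneous 2D line l = (l1, l2, l3)

def line_reprojection_error(l, x_s, x_e):
    """Signed point-to-line distances of the detected segment endpoints
    x_s, x_e (homogeneous pixel coordinates, last entry 1) to line l."""
    norm = np.hypot(l[0], l[1])
    return np.array([x_s @ l, x_e @ l]) / norm
```

Only the moment part n_c is needed for the projection, since the orientation of the projected 2D line is already encoded in the camera-frame moment vector. Measuring the distances of both endpoints of the matched 2D segment to the projected infinite line constrains the offset and the orientation of the line at the same time, which is what allows point and line residuals to be stacked in a single back-end optimization problem.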
