Abstract

Visual–inertial SLAM systems achieve highly accurate estimation of camera motion and 3-D representation of the environment. Most existing methods rely on point features obtained by feature matching, or on direct image alignment using photo-consistency constraints, and their performance usually degrades in low-texture environments. Lines, however, are common in man-made environments and convey the geometric structure of the scene. In this paper, we improve the robustness of a visual–inertial SLAM system in such situations by using both points and lines. Our method, built on ORB-SLAM2, effectively combines point, line, and IMU measurements by selecting keyframes carefully and handling outlier lines efficiently. The bundle-adjustment cost function comprises point reprojection errors, line reprojection errors, and IMU residual errors. We derive the Jacobian matrices of the line reprojection errors with respect to the 3-D endpoints of the line segments and the camera motion. Loop closures are detected from both point and line features using the bag-of-words approach. Our method is evaluated on the public EuRoC dataset and compared with state-of-the-art visual–inertial fusion methods. Experimental results show that it achieves the highest accuracy on most test sequences, especially in challenging situations such as low-texture and illumination-changing environments.
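The line reprojection error mentioned above is commonly defined as the distance from the projected 3-D endpoints of a line segment to the observed 2-D line in the image. As a minimal illustrative sketch (not the paper's implementation; the function names, the pinhole model, and the normalized line parameterization `l = (a, b, c)` with `a² + b² = 1` are assumptions for illustration), this residual can be written as:

```python
import numpy as np

def project(K, T_cw, X_w):
    """Project a 3-D world point X_w into the image using intrinsics K
    and a 4x4 world-to-camera pose T_cw (pinhole model, no distortion)."""
    X_c = T_cw[:3, :3] @ X_w + T_cw[:3, 3]   # transform into camera frame
    x = K @ X_c                               # apply intrinsics
    return x[:2] / x[2]                       # perspective division

def line_reprojection_error(K, T_cw, P_w, Q_w, l_obs):
    """Residual of a 3-D line segment with endpoints P_w, Q_w against an
    observed image line l_obs = (a, b, c), normalized so a^2 + b^2 = 1.
    Each component is the signed point-to-line distance of one projected
    endpoint; both components go into the bundle-adjustment cost."""
    residuals = []
    for X_w in (P_w, Q_w):
        u, v = project(K, T_cw, X_w)
        residuals.append(l_obs @ np.array([u, v, 1.0]))
    return np.array(residuals)
```

Because the residual is linear in the projected pixel coordinates, its Jacobians with respect to the endpoints and the camera pose follow from the chain rule through `project`, which is the derivation the paper carries out analytically.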

