Abstract

In low-textured environments, visual simultaneous localization and mapping (SLAM) that relies only on point features struggles to track a sufficient number of points, which degrades accuracy and robustness. To address this problem, this paper proposes an improved visual-inertial odometry (VIO) that exploits both point and line segment features to achieve fast and robust results. Visual features, including points and lines, are extracted and tracked with an optical flow method. For line segment features, Plücker coordinates are employed to represent 3D spatial lines, which are optimized using the orthonormal representation. Furthermore, point features and line endpoints are parameterized by inverse depth. To fuse data from the camera and the inertial measurement unit (IMU), the states, comprising IMU states and 3D landmarks, are estimated by minimizing the measurement residuals, namely the visual re-projection error and the pre-integrated IMU error, over a sliding window. The proposed method is evaluated on the EuRoC datasets and compared with state-of-the-art methods; it achieves more robust performance in most of the experiments while maintaining speed and accuracy.
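For reference, a sliding-window objective of the kind the abstract describes can be sketched as follows. The notation below (state vector $\mathcal{X}$, residuals $r_{\mathcal{B}}$, $r_{\mathcal{P}}$, $r_{\mathcal{L}}$, covariances $\Sigma$, robust loss $\rho$) is an illustrative assumption and is not taken from the paper itself. A 3D line is represented by Plücker coordinates $\mathcal{L} = (\mathbf{n}^\top, \mathbf{d}^\top)^\top$, where $\mathbf{n}$ is the normal of the plane spanned by the line and the origin and $\mathbf{d}$ is the line direction, and is updated through its four-degree-of-freedom orthonormal representation $(\mathbf{U}, \mathbf{W}) \in SO(3) \times SO(2)$. Under these assumptions, the joint cost minimized over the sliding window would read

\min_{\mathcal{X}} \;
\sum_{k} \big\| r_{\mathcal{B}}\big(\hat{z}_{b_{k+1}}^{b_k}, \mathcal{X}\big) \big\|^{2}_{\Sigma_{b_k}}
\;+\; \sum_{(i,j)} \rho\Big( \big\| r_{\mathcal{P}}\big(\hat{z}_{p_i}^{c_j}, \mathcal{X}\big) \big\|^{2}_{\Sigma_{\mathcal{P}}} \Big)
\;+\; \sum_{(l,j)} \rho\Big( \big\| r_{\mathcal{L}}\big(\hat{z}_{\mathcal{L}_l}^{c_j}, \mathcal{X}\big) \big\|^{2}_{\Sigma_{\mathcal{L}}} \Big),

where $r_{\mathcal{B}}$ is the pre-integrated IMU residual between consecutive keyframes, and $r_{\mathcal{P}}$ and $r_{\mathcal{L}}$ are the re-projection residuals of point and line landmarks observed in camera frame $c_j$.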
