Abstract

To address the problem of estimating the camera trajectory and building a structural three-dimensional (3D) map from inertial measurements and visual observations, this paper proposes point–line visual–inertial odometry (PL-VIO), a tightly-coupled monocular visual–inertial odometry system exploiting both point and line features. Compared with point features, lines provide significantly more geometrical structure information about the environment. To obtain both computational simplicity and representational compactness for a 3D spatial line, Plücker coordinates and the orthonormal representation are employed. To tightly and efficiently fuse the information from inertial measurement units (IMUs) and visual sensors, we optimize the states by minimizing a cost function which combines the pre-integrated IMU error term with the point and line re-projection error terms in a sliding-window optimization framework. Experiments on public datasets demonstrate that PL-VIO, which combines point and line features, outperforms several state-of-the-art VIO systems that use point features only.
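
As a worked sketch of the objective just described (written in our own notation; the paper's exact symbols and weighting may differ), the sliding-window estimate minimizes the pre-integrated IMU residual between consecutive body frames together with robustified point and line re-projection residuals, each weighted by its measurement covariance:

    \min_{\mathcal{X}} \;
        \sum_{k \in \mathcal{B}} \big\| r_{\mathcal{B}}\big(z^{b_k}_{b_{k+1}}, \mathcal{X}\big) \big\|^{2}_{\Sigma_{b_k}}
      + \sum_{(i,j) \in \mathcal{P}} \rho\Big( \big\| r_{\mathcal{P}}\big(z^{c_j}_{i}, \mathcal{X}\big) \big\|^{2}_{\Sigma_{\mathcal{P}}} \Big)
      + \sum_{(l,j) \in \mathcal{L}} \rho\Big( \big\| r_{\mathcal{L}}\big(z^{c_j}_{l}, \mathcal{X}\big) \big\|^{2}_{\Sigma_{\mathcal{L}}} \Big)

Here \mathcal{X} collects the states in the window (keyframe poses, velocities, IMU biases, and point/line feature parameters), r_{\mathcal{B}} is the IMU pre-integration residual, r_{\mathcal{P}} and r_{\mathcal{L}} are the point and line re-projection residuals, and \rho is a robust kernel such as Huber.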

Highlights

  • Localization and navigation have attracted much attention in recent years across a wide range of applications, such as self-driving cars, service robots, and unmanned aerial vehicles

  • Existing localization technologies have obvious respective drawbacks: global navigation satellite systems (GNSSs) only provide reliable localization information given a clear sky view [6]; laser lidar suffers from reflection problems on objects with glass surfaces [7]; measurements from civilian inertial measurement units (IMUs) are noisy, so inertial navigation systems drift quickly due to error accumulation [8]; and monocular simultaneous localization and mapping (SLAM) can only recover the motion trajectory up to scale, and tracking tends to be lost when the camera moves fast or the illumination changes dramatically [9,10,11]

  • We evaluated our point–line visual–inertial odometry (PL-VIO) system on two public benchmark datasets: the EuRoc micro aerial vehicle (MAV) dataset and the PennCOSYVIO dataset


Summary

Introduction

Localization and navigation have attracted much attention in recent years across a wide range of applications, such as self-driving cars, service robots, and unmanned aerial vehicles. Optimization-based approaches can re-linearize the state vector at different operating points, achieving higher accuracy than filtering-based methods [14]. Kong et al. [25] built a stereo VIO system combining point and line features by utilizing trifocal geometry. In our proposed PL-VIO method, we integrate line features into an optimization-based framework. To build a structural 3D map and recover the camera's motion, PL-VIO optimizes the system states by jointly minimizing the IMU pre-integration constraints together with the point and line re-projection errors in a sliding-window model, tightly and efficiently fusing the information from the visual and inertial sensors.
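
To make the line terms concrete, below is a minimal sketch (Python with NumPy) of how a spatial line held in Plücker coordinates can be projected into the image and its re-projection error measured as endpoint-to-line distances. The helper names, the pinhole camera model, and the identity camera pose in the usage example are our assumptions for illustration, not the paper's code.

    import numpy as np

    def plucker_from_points(p1, p2):
        """Plücker coordinates (n, v) of the 3D line through p1 and p2:
        v is the direction, n = p1 x p2 is the moment about the origin.
        Every point X on the line satisfies n . X = 0."""
        return np.cross(p1, p2), p2 - p1

    def line_projection_matrix(fx, fy, cx, cy):
        """Matrix K_L mapping the moment vector n (expressed in the camera
        frame) to the image line l = K_L @ n, for a pinhole camera."""
        return np.array([[fy, 0.0, 0.0],
                         [0.0, fx, 0.0],
                         [-fy * cx, -fx * cy, fx * fy]])

    def line_reprojection_residual(n_cam, K_L, s, e):
        """Signed distances from the detected segment endpoints s and e
        (homogeneous pixel coordinates) to the projected infinite line."""
        l = K_L @ n_cam                       # image line (a, b, c)
        return np.array([s @ l, e @ l]) / np.hypot(l[0], l[1])

    # Usage: a vertical 3D line seen by a camera at the origin (identity pose).
    n, v = plucker_from_points(np.array([1.0, 0.0, 5.0]),
                               np.array([1.0, 1.0, 5.0]))
    K_L = line_projection_matrix(fx=460.0, fy=460.0, cx=320.0, cy=240.0)
    r = line_reprojection_residual(n, K_L,
                                   s=np.array([412.0, 240.0, 1.0]),
                                   e=np.array([412.0, 332.0, 1.0]))
    # r is [0, 0]: both endpoints lie exactly on the projected line u = 412.

Because this six-parameter Plücker form over-parameterizes a line's four degrees of freedom, the paper pairs it with the orthonormal representation for the minimal update step during optimization.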

Notations
IMU Pre-Integration
Geometric Representation of Line
Plücker Line Coordinates
Orthonormal Representation
Tightly-Coupled Visual–Inertial Fusion
Sliding Window Formulation
IMU Measurement Model
Point Feature Measurement Model
Line Feature Measurement Model
Monocular Visual–Inertial Odometry with Point and Line Features
Front End
Back End
Implementation Details
Experimental Results
EuRoc MAV Dataset
PennCOSYVIO Dataset
Computing Time
Conclusions

