Abstract

Visual-inertial odometry (VIO) has recently attracted considerable interest for efficient and accurate ego-motion estimation of robots and automobiles. With a monocular camera and an inertial measurement unit (IMU) rigidly attached, VIO aims to estimate the 3D pose trajectory of the device in a global metric space. We propose a novel visual-inertial odometry algorithm that directly optimizes the camera poses using noisy IMU data and visual feature locations. Instead of running separate filters for the IMU and visual data, we combine them in a unified non-linear optimization framework in which the perspective reprojection costs of visual features and the motion costs relating the IMU acceleration and angular velocity to the pose trajectory are jointly optimized. The proposed system is evaluated on the EuRoC dataset for quantitative comparison with the state of the art in visual-inertial odometry, and on mobile phone data as a real-world application. The proposed algorithm is conceptually clear and simple, achieves good accuracy, and can be easily implemented using publicly available non-linear optimization toolkits.
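To make the joint formulation concrete, the toy sketch below (not the paper's implementation) stacks visual reprojection residuals and IMU-style acceleration residuals into a single non-linear least-squares problem over a position trajectory, solved with an off-the-shelf optimizer. Orientations, gravity, and IMU biases are omitted and the 3D landmarks are assumed known purely to keep the example short; all names and weights are hypothetical.

```python
# Toy joint visual + inertial least-squares over a camera trajectory.
# Simplifications (for illustration only): identity camera rotation,
# known landmarks, no gravity or IMU bias.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# --- Synthetic ground truth -------------------------------------------------
K_frames, dt = 20, 0.1
t = np.arange(K_frames) * dt
traj_gt = np.stack([t, 0.5 * t**2, np.zeros_like(t)], axis=1)   # true positions
landmarks = rng.uniform([-2, -2, 4], [2, 2, 8], size=(30, 3))    # known 3D points
fx = fy = 500.0                                                  # pinhole focal length

def project(points_cam):
    """Pinhole projection with identity rotation and principal point at 0."""
    return np.stack([fx * points_cam[:, 0] / points_cam[:, 2],
                     fy * points_cam[:, 1] / points_cam[:, 2]], axis=1)

# Noisy feature observations of every landmark in every frame.
obs = np.stack([project(landmarks - p) for p in traj_gt])
obs += rng.normal(scale=0.5, size=obs.shape)

# Noisy "IMU" accelerations derived from the true trajectory (central differences).
acc_gt = (traj_gt[2:] - 2 * traj_gt[1:-1] + traj_gt[:-2]) / dt**2
acc_meas = acc_gt + rng.normal(scale=0.05, size=acc_gt.shape)

# --- Joint residual: reprojection costs + motion costs -----------------------
def residuals(x):
    p = x.reshape(K_frames, 3)
    # Visual term: reprojection error of every landmark in every frame.
    r_vis = np.concatenate([(project(landmarks - pk) - obs[k]).ravel()
                            for k, pk in enumerate(p)])
    # Motion term: finite-difference acceleration should match the IMU reading.
    acc_est = (p[2:] - 2 * p[1:-1] + p[:-2]) / dt**2
    r_imu = 10.0 * (acc_est - acc_meas).ravel()   # hypothetical relative weight
    return np.concatenate([r_vis, r_imu])

# Optimize all poses jointly from a perturbed initialization.
x0 = (traj_gt + rng.normal(scale=0.3, size=traj_gt.shape)).ravel()
sol = least_squares(residuals, x0)
err = np.linalg.norm(sol.x.reshape(K_frames, 3) - traj_gt, axis=1)
print(f"mean position error after joint optimization: {err.mean():.4f} m")
```

In the full method both residual types would be expressed over 6-DoF poses (and IMU biases) and handed to a non-linear least-squares toolkit, but the structure is the same: one stacked cost, optimized jointly rather than filtered separately.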
