Abstract

Combinations of an IMU (Inertial Measurement Unit) and a camera have many applications in electronic image stabilization, augmented reality and navigation, where camera-IMU relative pose calibration is one of the key technologies; it can effectively handle cases of insufficient feature points, unclear texture, blurred images, etc. In this paper, a new camera-IMU relative pose calibration method is proposed by establishing a BP neural network model. With the trained model we can obtain the transform from IMU inertial measurements to images and thus achieve camera-IMU relative pose calibration. The advantage of our method is that a BP neural network trained with the Levenberg-Marquardt algorithm avoids more complex calculations in the whole process, which is convenient for applying a combined camera-IMU system. Meanwhile, nonlinearities and noise are compensated during training, and the impact of gravity can be ignored. Our experimental results demonstrate that this method achieves camera-IMU relative pose calibration, with the accuracy of the quaternion estimation reaching about 0.01.
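The core idea of the abstract, a BP (backpropagation) neural network trained with the Levenberg-Marquardt algorithm to map IMU measurements to a camera-pose quaternion, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the single hidden layer, the synthetic data, and the use of SciPy's Levenberg-Marquardt least-squares solver as the trainer are all assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical synthetic data: 6-D IMU samples (gyro + accel) -> unit quaternions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
Q = rng.normal(size=(200, 4))
Y = Q / np.linalg.norm(Q, axis=1, keepdims=True)  # normalized target quaternions

n_in, n_hid, n_out = 6, 8, 4  # illustrative network size, not from the paper

def unpack(p):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = p[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = p[i:i + n_hid]; i += n_hid
    W2 = p[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = p[i:i + n_out]
    return W1, b1, W2, b2

def residuals(p):
    """Forward pass of a one-hidden-layer BP network; return per-output errors."""
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1 + b1)   # hidden layer with tanh activation
    pred = h @ W2 + b2         # linear output layer (quaternion estimate)
    return (pred - Y).ravel()

n_params = n_in * n_hid + n_hid + n_hid * n_out + n_out  # 92 parameters
p0 = rng.normal(scale=0.1, size=n_params)

# Levenberg-Marquardt training: minimize the sum of squared residuals.
fit = least_squares(residuals, p0, method="lm")

# At inference, the raw output would be renormalized to a unit quaternion.
q_hat = residuals(fit.x).reshape(-1, 4) + Y  # recover predictions from residuals
q_hat /= np.linalg.norm(q_hat, axis=1, keepdims=True)
```

Levenberg-Marquardt is well suited here because the training objective is a sum of squares, so the Jacobian-based update converges much faster than plain gradient-descent backpropagation on a small network.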

Highlights

  • A single visual sensor cannot recover the correct camera motion trajectory when image features are scarce, texture is not obvious, the image is blurred, or illumination changes, which may degrade image stabilization and tracking

  • Combinations of an IMU (Inertial Measurement Unit) and a camera have many applications in electronic image stabilization, augmented reality and navigation, where camera-IMU relative pose calibration is one of the key technologies; it can effectively handle cases of insufficient feature points, unclear texture, blurred images, etc

  • Traditional camera-IMU relative pose calibration methods are mostly based on the Extended Kalman Filter (EKF) (Li & Mourikis, 2000; Jia & Evans, 2007; Li, Kim & Mourikis, 2013; Jia & Evans, 2014) or the Unscented Kalman Filter (UKF) (Julier & Uhlmann, 2004)



Introduction

A single visual sensor cannot recover the correct camera motion trajectory when image features are scarce, texture is not obvious, the image is blurred, or illumination changes, which may degrade image stabilization and tracking. In many fields such as electronic image stabilization, augmented reality and navigation, a combined system of a visual sensor and an inertial sensor is therefore used. Such systems use IMU data to reflect the motion trajectory of the camera by estimating the rigid body transformation between the IMU and the camera, for which camera-IMU relative pose calibration is very important. Jonathan Kelly (Kelly & Sukhatme, 2009) employs an unscented Kalman filter to estimate the relative pose of the sensors, implemented with hardware support devices. They achieved camera-IMU calibration with experimental translation errors of 0.43 cm, 0.23 cm and 0.24 cm in the x, y and z directions respectively, and an error of about 0.06 degrees in each of the three Euler angles.

Prior Work
Algorithm
Experimental Platform
Conclusion
