Abstract

Multi-sensor fusion is a fundamental prerequisite for building highly autonomous and robust robots. Many approaches have been studied, including visual–inertial odometry (VIO), integrated navigation, and LiDAR–inertial odometry. VIO in particular has achieved gratifying results owing to the complementary sensing capabilities of inertial measurement units (IMUs) and cameras. However, most existing work focuses on fusing visual and inertial data, while IMU errors receive little attention, especially for low-cost or poorly calibrated microelectromechanical system (MEMS) IMUs. Such errors can significantly degrade VIO performance. In this study, we compensate the IMU with camera assistance. The key characteristic of the method is that the compensation parameters (scale factors) are optimized from coarse to fine by combining the time domain with the frequency domain. This time-domain and frequency-domain optimization suppresses the large noise encountered during dynamic calibration of extremely low-cost sensor platforms. The effectiveness of the method is validated through experiments and simulations. The minimal calibration error (0.46%) is comparable to state-of-the-art work. Feeding the compensated IMU measurements into a VIO algorithm improves localization accuracy by 9% to 15%. The method thus improves the performance of VIO systems equipped with low-cost or poorly calibrated MEMS IMUs and reduces the hardware and deployment costs of the system.
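The coarse-to-fine idea described above can be illustrated with a minimal sketch (not the paper's implementation): a gyroscope scale factor is first estimated by a time-domain least-squares fit against a camera-derived reference rotation rate, then refined by matching spectral magnitudes in a frequency band where the motion energy concentrates, which dilutes the broadband noise of a low-cost MEMS IMU. All function names, the signal band, and the simulated data are assumptions for illustration.

```python
import numpy as np

def coarse_scale_time_domain(imu_rate, ref_rate):
    # Coarse step: least-squares fit of ref_rate ≈ k * imu_rate in the
    # time domain (closed-form solution of a 1-D linear regression).
    return float(np.dot(imu_rate, ref_rate) / np.dot(imu_rate, imu_rate))

def fine_scale_freq_domain(imu_rate, ref_rate, k0, band=(1, 20)):
    # Fine step: refine k by matching FFT magnitudes inside a chosen
    # frequency band, where motion energy dominates wideband sensor noise.
    lo, hi = band
    imu_mag = np.abs(np.fft.rfft(imu_rate))[lo:hi]
    ref_mag = np.abs(np.fft.rfft(ref_rate))[lo:hi]
    # Grid search in a small neighbourhood around the coarse estimate.
    ks = k0 * np.linspace(0.9, 1.1, 2001)
    errs = [np.sum((k * imu_mag - ref_mag) ** 2) for k in ks]
    return float(ks[int(np.argmin(errs))])

if __name__ == "__main__":
    # Simulated calibration motion: a sinusoidal rotation rate observed by
    # a noisy IMU with a hypothetical scale error of 3%.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 2000)
    true_rate = np.sin(2 * np.pi * 0.8 * t)       # camera-derived reference
    imu = true_rate / 1.03 + 0.05 * rng.standard_normal(t.size)
    k = coarse_scale_time_domain(imu, true_rate)  # coarse estimate
    k = fine_scale_freq_domain(imu, true_rate, k) # frequency-domain refinement
    print(f"estimated scale factor: {k:.3f}")     # close to the true 1.03
```

In this toy setting the frequency-domain refinement recovers the scale factor more reliably than the time-domain fit alone, because the broadband noise spreads over many FFT bins while the motion signal concentrates in a few.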
