Abstract

In contrast to conventional methods that optimize feature-matching results at the data-processing level, we fuse the information collected by the camera and the inertial sensor in a tightly coupled manner at the data-gathering level, which yields better accuracy and robustness in the feature-matching process. Specifically, we propose a visual-inertial feature tracking method that combines inertial measurement unit (IMU) calibration, feature matching, and prediction algorithms. The approach includes a vision-aided multi-level IMU systematic calibration method and an inertial-aided image feature prediction algorithm, which effectively process and utilize information from multiple sensors. Our method addresses not only the image distortion and blur caused by illumination changes and fast camera motion but also the measurement errors that accumulate during long-term operation of the inertial sensors. Extensive experiments demonstrate that its efficiency is superior to that of state-of-the-art methods: with the accuracy remaining at the same level, feature-matching speed is improved by 41.8%. Additionally, when applied to simultaneous localization and mapping systems, its localization performance is better than that of the VINS-mono method by 8.1%.
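To make the core idea concrete, the sketch below is a minimal illustration, not the authors' implementation, of one common form of inertial-aided feature prediction: gyroscope samples between two frames are integrated into a rotation, each feature's pixel location is warped through the resulting infinite homography, and matching is then restricted to a small window around the prediction. The intrinsic matrix K, the gyro samples, and the feature coordinates are hypothetical values, and the camera-IMU extrinsics are assumed to be identity.

```python
# Minimal sketch of inertial-aided feature prediction (assumptions:
# gyro is expressed in the camera frame, i.e. identity extrinsics,
# and scene points are far enough that a rotation-only model holds).
import numpy as np

def so3_exp(phi):
    """Rodrigues' formula: map a rotation vector (rad) to a rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3)
    a = phi / theta
    A = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

def integrate_gyro(gyro_samples, dt):
    """Accumulate gyro readings (rad/s) into the frame-to-frame rotation R
    of the camera from time 1 to time 2 (R_c1_c2)."""
    R = np.eye(3)
    for omega in gyro_samples:
        R = R @ so3_exp(np.asarray(omega) * dt)
    return R

def predict_features(pixels, R, K):
    """Warp pixel coordinates by the infinite homography H = K R^T K^-1.

    With R = R_c1_c2, a bearing x_c2 = R^T x_c1, so distant points map
    through H; translation-induced parallax is absorbed by the search window.
    """
    H = K @ R.T @ np.linalg.inv(K)
    pts_h = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous
    warped = (H @ pts_h.T).T
    return warped[:, :2] / warped[:, 2:3]

# Usage: predict where the last frame's features land, then let the
# matcher scan only a small neighborhood around each prediction.
K = np.array([[458.0, 0.0, 376.0],
              [0.0, 457.0, 240.0],
              [0.0, 0.0, 1.0]])       # hypothetical intrinsics
gyro = [(0.01, -0.02, 0.3)] * 20      # 20 samples at 200 Hz (dt = 5 ms)
prev_pts = np.array([[100.0, 120.0], [400.0, 250.0]])
pred_pts = predict_features(prev_pts, integrate_gyro(gyro, 0.005), K)
search_radius = 15  # pixels; matching is confined to this window
```

Restricting the descriptor search to a predicted window is what drives the kind of matching speedup the abstract reports, since far fewer candidate pairs need to be compared.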
