Abstract

Accurate 3D reconstruction using multiple sensors is one of the fundamental steps in autonomous driving and robot navigation. However, with shaking camera feeds, scenes with few features, or moving objects, traditional 3D reconstruction systems that rely only on visual sensors easily suffer misalignment and drift of the 3D model. Existing 3D reconstruction systems based on Visual-Inertial Odometry (VIO) use complex algorithms with large memory consumption, which limits their applicability on mobile devices. To address this challenge, we propose a 3D reconstruction system using visual and Inertial Measurement Unit (IMU) sensors based on the Multi-State Constraint Kalman Filter (MSCKF). Specifically, we fuse the data from the visual and IMU sensors to improve the accuracy of the reconstructed 3D models. In addition, our proposed system maintains image information in the state vector over a sliding window and delays linearization, which not only preserves the accuracy of the algorithm but also makes more efficient use of resources. We compare the performance of our approach with state-of-the-art methods on a public dataset and on our own sensors. Our results show that the proposed 3D reconstruction system using visual and IMU sensors outperforms all previous methods by a large margin.
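
As a rough illustration of the sliding-window idea described above, the sketch below shows an MSCKF-style state that clones the camera pose into the state vector when an image arrives and marginalizes the oldest clone to bound memory. All class names, dimensions, and the placeholder Jacobian are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class MsckfSlidingWindow:
    """Minimal sketch (our assumption, not the paper's code) of an
    MSCKF-style state: a fixed-size IMU error state plus a sliding
    window of cloned camera poses. Feature measurements would be
    buffered and linearized only when a feature's track ends, which
    is the delayed linearization the abstract refers to."""

    IMU_DIM = 15    # orientation(3) + position(3) + velocity(3) + biases(6)
    CLONE_DIM = 6   # cloned camera orientation(3) + position(3)

    def __init__(self, max_clones=10):
        self.max_clones = max_clones
        self.clones = []                       # timestamps of cloned poses
        self.P = np.eye(self.IMU_DIM) * 1e-3   # error-state covariance

    def augment(self, timestamp):
        """Clone the current camera pose into the state on a new image.
        J maps the IMU error state to the clone's error state; the
        identity block here is a placeholder for the true Jacobian."""
        n = self.P.shape[0]
        J = np.zeros((self.CLONE_DIM, n))
        J[:, :self.CLONE_DIM] = np.eye(self.CLONE_DIM)  # placeholder
        top = np.hstack([self.P, self.P @ J.T])
        bot = np.hstack([J @ self.P, J @ self.P @ J.T])
        self.P = np.vstack([top, bot])
        self.clones.append(timestamp)
        if len(self.clones) > self.max_clones:
            self._marginalize_oldest()

    def _marginalize_oldest(self):
        """Drop the oldest clone's rows/columns so the state stays a
        fixed size, which is what keeps memory use bounded on mobile."""
        i = self.IMU_DIM  # oldest clone sits right after the IMU block
        keep = np.r_[0:i, i + self.CLONE_DIM:self.P.shape[0]]
        self.P = self.P[np.ix_(keep, keep)]
        self.clones.pop(0)

# Usage: with max_clones=3, the covariance stays at 15 + 3*6 = 33 rows
# no matter how many images arrive, illustrating the bounded state size.
win = MsckfSlidingWindow(max_clones=3)
for t in range(5):
    win.augment(t)
print(win.P.shape)  # (33, 33)
```

The design point this sketch illustrates is that the window size, not the trajectory length, determines memory use, which is why an MSCKF-based pipeline can remain practical on resource-constrained devices.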
