Abstract

In recent research, monocular simultaneous localization and mapping (SLAM) remains a well-established technique for ego-motion tracking; however, it suffers significantly from scale drift. Depth estimation, which is still a challenging problem in a monocular vision system, is closely related to this drift, and as a result monocular SLAM remains unsuitable for large-scale mapping and localization. This paper presents a novel solution, a wearable and embedded EMoVI-SLAM system, that resolves scale drift through a multi-sensor fusion architecture integrating visual and inertial data, with monocular SLAM as the visual framework. First, the unknown scale parameter of the monocular vision system is estimated from the IMU measurements, while the gravity direction and gyroscope bias are initialized. Second, the pose estimated by the monocular visual sensor is fused with the IMU measurements using an Unscented Kalman Filter (UKF). Furthermore, to minimize scale drift, the scale is re-computed whenever the IMU bias errors exceed a safe threshold. Finally, experiments are carried out by mounting the embedded SLAM system on a head-gear in two different test environments, covering large-scale indoor and outdoor motion, as well as on the EuRoC dataset. The results show that the proposed algorithm outperforms state-of-the-art visual-inertial SLAM systems.
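
For context, one common way to recover the metric scale during visual-inertial initialization (a standard formulation, not necessarily the exact derivation used in this paper) is to align the up-to-scale visual keyframe positions \bar{p}_i with the IMU-preintegrated position term \Delta p_{i,i+1}. For each pair of consecutive keyframes with interval \Delta t_i, keyframe rotation R_i, body velocity v_i, and gravity vector g:

    s \, \bar{p}_{i+1} = s \, \bar{p}_i + v_i \, \Delta t_i - \tfrac{1}{2} \, g \, \Delta t_i^2 + R_i \, \Delta p_{i,i+1}

Stacking this constraint over all keyframe pairs gives a linear least-squares problem in the scale s, the gravity vector, and the keyframe velocities; the gyroscope bias is usually estimated beforehand by aligning the preintegrated rotations with the visual relative rotations.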
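To illustrate the fusion step, the following minimal Python sketch uses the third-party filterpy library (which the paper does not mention); the state layout, noise values, and bias threshold are all illustrative assumptions. It propagates a simplified position-velocity-bias state with IMU acceleration, corrects it with the scale-corrected monocular pose, and flags a scale re-estimation when the estimated accelerometer bias exceeds a threshold.

    import numpy as np
    from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

    DT = 0.005                 # IMU period (200 Hz), assumed
    BIAS_THRESHOLD = 0.05      # m/s^2, illustrative "safe" accelerometer-bias limit

    def fx(x, dt, acc=None):
        # Process model: kinematics driven by bias-corrected acceleration.
        # State x = [px, py, pz, vx, vy, vz, bax, bay, baz].
        p, v, ba = x[0:3], x[3:6], x[6:9]
        a = (acc - ba) if acc is not None else np.zeros(3)
        return np.concatenate([p + v * dt + 0.5 * a * dt ** 2, v + a * dt, ba])

    def hx(x):
        # Measurement model: the scale-corrected monocular SLAM pose observes
        # position only in this simplified sketch.
        return x[0:3]

    points = MerweScaledSigmaPoints(n=9, alpha=1e-3, beta=2.0, kappa=0.0)
    ukf = UnscentedKalmanFilter(dim_x=9, dim_z=3, dt=DT, fx=fx, hx=hx, points=points)
    ukf.Q = np.eye(9) * 1e-4   # process noise (assumed)
    ukf.R = np.eye(3) * 1e-2   # visual measurement noise (assumed)

    def step(imu_acc, visual_position=None):
        # One fusion cycle: propagate with IMU, correct with the visual pose,
        # and request a scale re-estimation if the bias estimate drifts too far.
        ukf.predict(acc=imu_acc)
        if visual_position is not None:
            ukf.update(visual_position)
        needs_scale_reinit = np.linalg.norm(ukf.x[6:9]) > BIAS_THRESHOLD
        return ukf.x.copy(), needs_scale_reinit

In the actual system the state would also carry orientation and the monocular scale itself, but the sketch captures the control flow implied by the abstract: IMU-driven prediction, visual update, and a bias-triggered scale re-initialization.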
