Abstract

The increasing computing and sensing capabilities of modern mobile phones have spurred research interest in developing new visual–inertial odometry (VIO) techniques that turn a smartphone into a self-contained vision-aided inertial navigation system for various applications. Smartphones nowadays use cameras with optical image stabilization (OIS) technology to reduce image blur. However, the OIS mechanism may cause the camera intrinsic parameters (CIP) to vary, which must be taken into account in VIO computation. In this article, we first develop a linear model relating the CIP to the acceleration measured by the inertial measurement unit. Based on this model, we introduce a new VIO method, called CIP-VMobile, which treats the CIP as state variables and tightly couples them with the other state variables in a graph optimization process to estimate the optimal state. The method uses the linear model to construct a factor graph and uses the linear-model-computed values as initial CIP estimates, which speeds up the VIO computation and yields better pose estimates. Simulation and experimental results with an iPhone 7 validate the method's efficacy. Building on CIP-VMobile, we fabricated a robotic navigation aid (RNA) around an iPhone 7 for assisted navigation. Experimental results with the RNA demonstrate CIP-VMobile's promise in real-world navigation applications.
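The abstract describes a linear model that maps IMU-measured acceleration to the OIS-induced variation of the camera intrinsics. A minimal sketch of such a model is shown below; the sensitivity matrix `A`, the nominal intrinsics `p0`, and all numeric values are illustrative assumptions, not the coefficients identified in the paper.

```python
import numpy as np

def cip_from_acceleration(accel, A, p0):
    """Hypothetical linear CIP model: p = A @ accel + p0.

    accel : (3,) IMU-measured acceleration in the camera frame
    A     : (4, 3) sensitivity matrix (illustrative values)
    p0    : (4,) nominal intrinsics [fx, fy, cx, cy] at rest
    """
    return A @ np.asarray(accel, dtype=float) + p0

# Illustrative numbers (not from the paper): nominal intrinsics for a
# phone camera, and a small sensitivity of the principal point to
# lateral acceleration caused by OIS lens displacement.
p0 = np.array([1450.0, 1450.0, 960.0, 540.0])
A = np.zeros((4, 3))
A[2, 0] = 2.5   # cx shifts with x-axis acceleration (pixels per m/s^2)
A[3, 1] = 2.5   # cy shifts with y-axis acceleration

# Predicted intrinsics under a given acceleration; in CIP-VMobile such
# predictions would seed the CIP state variables in the factor graph.
p = cip_from_acceleration([1.0, -0.5, 9.81], A, p0)
```

In the method summarized above, values like `p` would serve only as initial CIP estimates; the graph optimization then refines them jointly with the pose states.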
