Abstract
The increasing computing and sensing capabilities of modern mobile phones have spurred research interest in developing new visual–inertial odometry (VIO) techniques to turn a smartphone into a self-contained vision-aided inertial navigation system for various applications. Smartphones nowadays use cameras with optical image stabilization (OIS) technology to reduce image blur. However, the mechanism may result in varying camera intrinsic parameters (CIP), which must be taken into account in VIO computation. In this article, we first develop a linear model that relates the CIP to the acceleration measured by the inertial measurement unit (IMU). Based on the model, we introduce a new VIO method, called CIP-VMobile, which treats the CIP as state variables and tightly couples them with the other state variables in a graph optimization process to estimate the optimal state. The method uses the linear model to construct a factor graph and uses the linear-model-computed values as initial CIP estimates to speed up the VIO computation and attain a better pose estimation result. Simulation and experimental results with an iPhone 7 validate the method's efficacy. Building on CIP-VMobile, we fabricated a robotic navigation aid (RNA) around an iPhone 7 for assisted navigation. Experimental results with the RNA demonstrate CIP-VMobile's promise in real-world navigation applications.
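The abstract's central idea of a linear model mapping IMU-measured acceleration to the camera intrinsics can be sketched as follows. This is a minimal, hypothetical illustration only: the nominal intrinsics `P0`, the sensitivity matrix `K`, and the function `predict_cip` are made-up placeholders, not the paper's actual calibration values or API.

```python
# Hypothetical sketch of a linear CIP model: the intrinsics
# (fx, fy, cx, cy) are assumed to shift linearly with the
# IMU-measured acceleration (ax, ay, az) as the OIS lens moves.
# All numeric values below are illustrative, not from the paper.

P0 = [484.0, 484.0, 240.5, 320.5]   # nominal fx, fy, cx, cy (pixels)

# 4x3 sensitivity matrix: change in each intrinsic per m/s^2 (hypothetical)
K = [
    [0.00, 0.00, 0.12],
    [0.00, 0.00, 0.12],
    [0.35, 0.00, 0.00],
    [0.00, 0.35, 0.00],
]

def predict_cip(accel):
    """Predict camera intrinsics from IMU-measured acceleration (m/s^2)."""
    return [p0 + sum(k * a for k, a in zip(row, accel))
            for p0, row in zip(P0, K)]

# With gravity along the camera's optical axis, the prediction would
# serve as the initial CIP estimate fed into the factor-graph optimizer.
cip0 = predict_cip([0.0, 0.0, 9.81])
```

In the method described by the abstract, such predicted values would only seed the optimization; the factor graph then refines the CIP jointly with the pose states.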