Abstract

Mobile devices equipped with a monocular camera and an inertial measurement unit (IMU) are ideal platforms for augmented reality (AR) applications. However, the nontrivial noise of the low-cost IMUs commonly found in consumer-level mobile devices can lead to large errors in pose estimation and, in turn, significantly degrade the user experience in mobile AR apps. In this study, we propose a novel monocular visual-inertial state estimation approach for robust and accurate pose estimation even with low-cost IMUs. The core of our method is an IMU pre-integration correction approach that effectively reduces the negative impact of IMU noise by exploiting the visual constraints in a sliding window together with a kinematic constraint. We seamlessly integrate the IMU pre-integration correction module into a tightly-coupled, sliding-window-based optimization framework for state estimation. Experimental results on the public EuRoC dataset demonstrate the superiority of our method over the state-of-the-art VINS-Mono in terms of smaller absolute trajectory error (ATE) and relative pose error (RPE). We further apply our method to real AR applications on two types of consumer-level mobile devices equipped with low-cost IMUs, i.e., an off-the-shelf smartphone and AR glasses. Experimental results demonstrate that our method enables robust AR with little drift on both devices.
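As background on the pre-integration step the abstract refers to, the sketch below shows textbook IMU pre-integration between two keyframes: raw gyroscope and accelerometer samples are accumulated into a relative rotation, velocity, and position increment that a sliding-window optimizer can then constrain. This is a minimal illustration of the generic technique under stated simplifications (bias and gravity terms omitted), not the paper's correction method; the function names and the use of NumPy are our own assumptions.

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a vector, so that skew(w) @ v == cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(phi):
    """Map a rotation vector to a rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)  # first-order approximation near zero
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt):
    """Accumulate relative rotation dR, velocity dv, and position dp
    between two keyframes from body-frame IMU samples.
    Simplified: constant sample rate, no bias or gravity compensation."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2  # position increment
        dv = dv + (dR @ a) * dt                      # velocity increment
        dR = dR @ exp_so3(w * dt)                    # rotation increment
    return dR, dv, dp
```

Because the increments depend only on the IMU samples between the two keyframes (not on the absolute states), they can be reused unchanged as the optimizer re-linearizes the window, which is what makes pre-integrated factors attractive in sliding-window estimators.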
