Abstract

Simultaneous localization and mapping (SLAM) technology is used in many applications, such as augmented reality (AR)/virtual reality, robots, drones, and self-driving vehicles. In AR applications, rapid camera motion estimation and the recovery of actual size and scale are important issues. In this research, we introduce a real-time visual–inertial SLAM system based on adaptive keyframe selection for mobile AR applications. Specifically, the SLAM system is built on an adaptive keyframe selection visual–inertial odometry method, which combines an adaptive keyframe selection method with a lightweight visual–inertial odometry method. The inertial measurement unit data are used to predict the motion state of the current frame, and an adaptive selection method based on learning and automatic parameter setting determines whether the current frame is a keyframe. Relatively unimportant frames (non-keyframes) are processed using the lightweight visual–inertial odometry method for efficiency and real-time performance. We evaluate the system in a PC environment and compare it with state-of-the-art methods. The experimental results demonstrate that the mean translation root-mean-square error of the keyframe trajectory is 0.067 m without ground-truth scale matching, and the scale error is 0.58% on the EuRoC dataset. Moreover, the experimental results on a mobile device show that performance is improved by 34.5%–53.8% using the proposed method.
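The abstract describes a pipeline in which IMU data predicts the motion of the current frame and an adaptive, self-tuning rule decides whether the frame becomes a keyframe. The following is a minimal sketch of that idea, not the authors' implementation: the thresholds, the simple exponential update "learning" rule, and the `tracked_ratio` cue are all hypothetical stand-ins for the paper's learned, automatically set criteria.

```python
class AdaptiveKeyframeSelector:
    """Toy adaptive keyframe selection driven by IMU-predicted motion.

    rot / trans are IMU-predicted rotation and translation magnitudes for
    the current frame; tracked_ratio is the fraction of map features still
    tracked. All thresholds and the update rule are illustrative only.
    """

    def __init__(self, rot_thresh=0.05, trans_thresh=0.02, lr=0.1):
        self.rot_thresh = rot_thresh      # rad, hypothetical initial value
        self.trans_thresh = trans_thresh  # m, hypothetical initial value
        self.lr = lr                      # adaptation rate (hypothetical)

    def is_keyframe(self, rot, trans, tracked_ratio):
        # A frame is a keyframe if predicted motion is large or tracking degrades.
        key = (rot > self.rot_thresh
               or trans > self.trans_thresh
               or tracked_ratio < 0.5)
        # Adapt thresholds toward the observed motion statistics
        # (a stand-in for the paper's learning/automatic-setting scheme).
        self.rot_thresh += self.lr * (rot - self.rot_thresh)
        self.trans_thresh += self.lr * (trans - self.trans_thresh)
        return key


sel = AdaptiveKeyframeSelector()
print(sel.is_keyframe(0.10, 0.00, 1.0))  # large rotation -> keyframe
print(sel.is_keyframe(0.00, 0.00, 0.9))  # small motion -> non-keyframe
```

In the full system, frames for which `is_keyframe` returns `False` would be handed to the lightweight visual–inertial odometry path instead of the full keyframe pipeline.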
