Abstract

The rapid development of computer vision and mobile sensing technologies such as mobile LiDAR and RGB-D cameras is pushing these technologies toward real-life applications in Augmented Reality (AR), robotics, indoor GIS and autonomous driving. Camera localization is often a key enabling technology for these applications. In this paper, we develop a novel camera localization workflow based on a highly accurate 3D prior map optimized by our RGB-D SLAM method, in conjunction with a deep learning routine trained on consecutive video frames labeled with high-precision camera poses. Furthermore, an AR registration method tightly coupled with a game engine is proposed; it incorporates the proposed localization algorithm and aligns the real Kinect camera with a virtual camera of the game engine to facilitate AR application development in an integrated manner. The experimental results show that localization achieves an average error of 35 cm based on a fine-tuned prior 3D feature database with 3 cm accuracy relative to the ground-truth 3D LiDAR map. The influence of localization accuracy on the visual quality of the AR overlay is also demonstrated, and the alignment of the real and virtual cameras streamlines the implementation of an AR fire emergency response demonstration in a Virtual Geographic Environment.
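To illustrate the kind of real-to-virtual camera alignment the abstract describes, the following sketch shows how an estimated camera pose could be converted into the position and quaternion a game-engine virtual camera typically consumes. This is a minimal illustration only, not the authors' implementation: the function names and the assumption that the localization module outputs a 4x4 world-to-camera extrinsic matrix are hypothetical.

```python
import numpy as np

def extrinsic_to_virtual_camera(T_world_to_cam):
    """Invert a 4x4 world-to-camera extrinsic into the camera-to-world
    pose (position, rotation matrix) that would drive a virtual camera."""
    R = T_world_to_cam[:3, :3]
    t = T_world_to_cam[:3, 3]
    R_cw = R.T                 # inverse of a rotation is its transpose
    position = -R_cw @ t       # camera centre in world coordinates
    return position, R_cw

def rotation_to_quaternion(R):
    """Convert a 3x3 rotation matrix to a (w, x, y, z) quaternion,
    the orientation representation most game engines expect."""
    w = np.sqrt(max(0.0, 1.0 + R[0, 0] + R[1, 1] + R[2, 2])) / 2.0
    x = np.copysign(np.sqrt(max(0.0, 1.0 + R[0, 0] - R[1, 1] - R[2, 2])) / 2.0, R[2, 1] - R[1, 2])
    y = np.copysign(np.sqrt(max(0.0, 1.0 - R[0, 0] + R[1, 1] - R[2, 2])) / 2.0, R[0, 2] - R[2, 0])
    z = np.copysign(np.sqrt(max(0.0, 1.0 - R[0, 0] - R[1, 1] + R[2, 2])) / 2.0, R[1, 0] - R[0, 1])
    return np.array([w, x, y, z])

# Example: an identity extrinsic maps to a virtual camera at the origin
# with no rotation, i.e. perfect alignment of real and virtual views.
T = np.eye(4)
pos, R_cw = extrinsic_to_virtual_camera(T)
print(pos, rotation_to_quaternion(R_cw))
```

In such a pipeline, the accuracy of the estimated extrinsic directly determines how well virtual content stays registered to the real scene, which is why the abstract relates the 35 cm localization error to the visual quality of the AR overlay.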
