Abstract

This paper presents a scalable model-based approach to 3D scene reconstruction with a moving RGB-D camera. The proposed approach improves the accuracy of pose estimation by exploiting the rich information in the multi-channel RGB-D image data, and yields higher-quality 3D scene reconstructions than conventional approaches that rely on sparse features for pose estimation. A pre-learned image-based 3D model provides multiple templates for sampled views of the model, which are used to estimate the poses of the frames in the input RGB-D video without requiring a priori knowledge of the intrinsic and extrinsic camera parameters. Through template-to-frame registration, the reconstructed 3D scene can be loaded into an augmented reality (AR) environment to facilitate display, interaction, and rendering in an image-based AR application. Finally, we verify the established reconstruction system on publicly available benchmark datasets and compare it with state-of-the-art pose estimation algorithms. The results indicate that our approach outperforms the compared methods in pose estimation accuracy.
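The template-to-frame registration step described above can be illustrated with a minimal sketch. The code below is a hypothetical illustration built on the Open3D library, not the authors' implementation: it assumes the template views and the incoming frame have already been lifted to point clouds, and it scores each sampled template view by ICP fitness, returning the best-fitting rigid transform. The function name `estimate_pose`, the voxel size, and the correspondence distance are all illustrative assumptions.

```python
import numpy as np
import open3d as o3d


def estimate_pose(frame_pcd, template_pcds, voxel=0.02, max_dist=0.05):
    """Register an RGB-D frame against sampled template views of a
    pre-learned 3D model; return the best frame-to-template transform.
    All parameter values are illustrative, not taken from the paper."""
    # Downsample the frame once and estimate normals for point-to-plane ICP.
    src = frame_pcd.voxel_down_sample(voxel)
    src.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2.5 * voxel, max_nn=30))

    best_fitness, best_transform = -1.0, np.eye(4)
    for tpl in template_pcds:
        tgt = tpl.voxel_down_sample(voxel)
        tgt.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2.5 * voxel, max_nn=30))
        # Point-to-plane ICP refines the frame-to-template alignment.
        result = o3d.pipelines.registration.registration_icp(
            src, tgt, max_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        # Keep the template view that explains the most frame points.
        if result.fitness > best_fitness:
            best_fitness = result.fitness
            best_transform = result.transformation
    return best_transform, best_fitness
```

In a full system along the lines the abstract describes, the identity initialization would be replaced by the known viewpoint of each sampled template, and composing the recovered frame-to-template transform with that viewpoint would give the camera pose in model coordinates; the color channels, which the paper credits for the accuracy gain, would enter through an additional photometric term that this purely geometric sketch omits.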
