Abstract
We propose a camera calibration method for generating a high-quality, photorealistic 3D volumetric model from several low-cost RGB-D cameras placed around a capture space. We present an efficient workflow for registering the model and propose an iterative calibration scheme to construct it. First, vertical calibration between each upper and lower camera is performed over multiple frames. Four camera pairs are then selected, and calibration proceeds around the rig, propagating the vertical calibration results to each adjacent viewpoint in turn. Once every camera pair is calibrated, the calibration is repeated by generating a virtual viewpoint between each pair. An error function over the 3D coordinates of feature points extracted from the RGB images is defined and minimized; when the optimized error converges below a threshold, calibration terminates and the final extrinsic parameters are obtained. After 3D reconstruction using the proposed calibration, a 3D point cloud is produced, and a simple, efficient refinement step is proposed to improve its quality. Experimental results demonstrate the advantage of the proposed technique through quantitative comparison of the calibration results against two ground-truth datasets and of the resulting 3D reconstructions.
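The core step described above — refining the extrinsics of a camera pair by minimizing the error between matched 3D feature points until it falls below a threshold — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes correspondences are already established and uses the standard SVD-based (Kabsch) rigid-transform estimate as the per-iteration optimizer, which is one common choice for this kind of 3D point alignment.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate rotation R and translation t mapping src -> dst.
    Uses the Kabsch/SVD method (an illustrative choice; the paper's
    exact optimizer is not specified here)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def calibrate_pair(src, dst, threshold=1e-6, max_iters=50):
    """Iteratively refine the pairwise extrinsics until the RMS error
    between matched 3D feature points converges below `threshold`."""
    R_total, t_total = np.eye(3), np.zeros(3)
    pts = src.copy()
    rms = np.inf
    for _ in range(max_iters):
        R, t = estimate_rigid_transform(pts, dst)
        pts = pts @ R.T + t
        # Compose the incremental update into the accumulated extrinsics.
        R_total, t_total = R @ R_total, R @ t_total + t
        rms = np.sqrt(np.mean(np.sum((pts - dst) ** 2, axis=1)))
        if rms < threshold:
            break
    return R_total, t_total, rms
```

With exact correspondences this converges in a single iteration; in practice the loop matters because feature matches are noisy and are re-weighted or re-selected across frames.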