Abstract
Our laboratory has proposed a method for generating elemental images for integral imaging 3D display by fusing depth images and RGB images, and has achieved promising results. However, because commercial-grade depth cameras produce incomplete and noisy measurements, accurate depth estimation of the target geometry cannot be achieved. We propose a deep learning method that estimates accurate scene depth from a single RGB-D image for elemental image generation. The proposed algorithm uses a deep convolutional network to infer surface normals, object masks, and occlusion boundaries from a single RGB-D image. These refined predictions are then combined with the original input depth image to solve for the depth of all pixels, including those missing from the original depth image. Experiments with various backbones in the proposed network structure show that the resulting model achieves better depth-image inpainting.
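The final stage described above, solving for the depth of every pixel given network predictions plus the observed depth, is typically posed as a sparse linear least-squares problem: observed pixels anchor the solution, and a regularization term (driven in the paper by the predicted normals and boundaries) propagates depth into the holes. The sketch below is a simplified, hypothetical illustration of that idea, with a plain neighbor-smoothness term standing in for the normal-consistency term; the function name and weights are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

def complete_depth(depth_obs, w_data=10.0, w_smooth=1.0):
    """Fill missing (NaN) depth values by sparse linear least squares.

    Data term: keep solved depth close to observed depth where it exists.
    Smoothness term: penalize differences between 4-neighbors everywhere
    (a stand-in for the normal/boundary-guided term in the paper).
    """
    H, W = depth_obs.shape
    idx = lambda y, x: y * W + x  # flatten 2D pixel index
    rows, cols, vals, b = [], [], [], []
    r = 0
    # Data term: one equation per observed pixel.
    for y in range(H):
        for x in range(W):
            if not np.isnan(depth_obs[y, x]):
                rows.append(r); cols.append(idx(y, x)); vals.append(w_data)
                b.append(w_data * depth_obs[y, x]); r += 1
    # Smoothness term: one equation per horizontal/vertical neighbor pair.
    for y in range(H):
        for x in range(W):
            if x + 1 < W:
                rows += [r, r]; cols += [idx(y, x), idx(y, x + 1)]
                vals += [w_smooth, -w_smooth]; b.append(0.0); r += 1
            if y + 1 < H:
                rows += [r, r]; cols += [idx(y, x), idx(y + 1, x)]
                vals += [w_smooth, -w_smooth]; b.append(0.0); r += 1
    A = csr_matrix((vals, (rows, cols)), shape=(r, H * W))
    solution = lsqr(A, np.asarray(b))[0]
    return solution.reshape(H, W)
```

Because linear depth ramps satisfy the discrete smoothness equations, holes surrounded by smoothly varying observations are filled almost exactly; the paper's full system replaces the plain smoothness term with constraints derived from the predicted surface normals and down-weights them across predicted occlusion boundaries.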