Abstract

Depth maps generally suffer from large erroneous areas, even in public RGB-Depth datasets. Existing learning-based depth recovery methods are limited by the scarcity of high-quality datasets, and optimization-based methods, which generally depend on local contexts, cannot effectively correct large erroneous areas. This paper develops an RGB-guided depth map recovery method based on the fully connected conditional random field (dense CRF) model that jointly exploits the local and global contexts of depth maps and RGB images. A high-quality depth map is inferred by maximizing its probability conditioned on a low-quality depth map and a reference RGB image under the dense CRF model. The optimization function is composed of redesigned unary and pairwise components, which constrain the local and global structures of the depth map, respectively, under the guidance of the RGB image. In addition, texture-copy artifacts are handled by two-stage dense CRF models in a coarse-to-fine manner. A coarse depth map is first recovered by embedding the RGB image in a dense CRF model in units of 3×3 blocks; it is then refined by embedding the RGB image in another dense CRF model in units of individual pixels and restricting the model to operate mainly in discontinuous regions. Extensive experiments on six datasets verify that the proposed method considerably outperforms a dozen baseline methods in correcting erroneous areas and suppressing texture-copy artifacts in depth maps.
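
For context, the recovery objective described above can be written in the standard MAP-inference form of a dense CRF. This is only an illustrative sketch: the abstract does not give the exact definitions of the redesigned potentials, and the symbols D (recovered depth map), D_lq (low-quality input depth), I (reference RGB image), ψ_u, and ψ_p are introduced here purely for exposition.

\[
D^{*} = \arg\max_{D} \, P\!\left(D \mid D_{\mathrm{lq}}, I\right)
      = \arg\min_{D} \, E(D),
\qquad
E(D) = \sum_{i} \psi_{u}\!\left(d_{i};\, D_{\mathrm{lq}}, I\right)
     + \sum_{i<j} \psi_{p}\!\left(d_{i}, d_{j};\, I\right),
\]

where the pairwise sum runs over all pixel pairs (hence "fully connected"): the unary term ψ_u constrains each recovered depth d_i toward the local structure of the observed depth under RGB guidance, while the pairwise term ψ_p enforces global consistency between pixels that appear similar in the RGB image.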
