Abstract

Although high-resolution (HR) depth images are required in many applications, such as virtual reality and autonomous navigation, the resolution and quality of depth maps produced by consumer depth cameras fall short of these requirements. Existing depth upsampling methods focus on extracting multiscale features from the HR color image to guide low-resolution (LR) depth upsampling, which causes blurry and inaccurate depth edges. In this paper, we propose DSRNet, a depth super-resolution (SR) network guided by blurry depth and clear intensity edges. DSRNet distinguishes effective edges from the many HR edges under the guidance of the blurry depth and the clear intensity edges. First, we perform global residual estimation based on an encoder–decoder architecture to extract edge structure from the HR color image for depth SR. Then, we distinguish effective edges from the HR edges on the decoder side under the guidance of the upsampled LR depth. To preserve edges for depth SR, we use intensity edge guidance that extracts clear intensity edges from the HR image. Finally, we use a residual loss to generate an accurate high-frequency (HF) residual and reconstruct HR depth maps. Experimental results show that DSRNet successfully reconstructs depth edges in the SR results and outperforms state-of-the-art methods in terms of visual quality and quantitative measurements.1

1The proposed model and some test image pairs are available at https://github.com/lanhui-123/DSRNet.
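The following is a minimal, illustrative sketch (not the authors' implementation, and all layer sizes and guidance inputs are assumptions) of the pipeline the abstract describes: an encoder–decoder predicts an HF residual from the HR color image, an intensity-edge map, and the upsampled (blurry) LR depth, and the HR depth is reconstructed as upsampled depth plus residual.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualEstimator(nn.Module):
    """Toy encoder-decoder mapping (HR color, intensity edges, upsampled LR depth) to an HF residual."""

    def __init__(self, feat=32):
        super().__init__()
        # Encoder input: HR color (3 ch) + intensity-edge map (1 ch) + upsampled depth (1 ch) = 5 channels.
        self.enc1 = nn.Sequential(nn.Conv2d(5, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        # Decoder: upsample back to full resolution and fuse with the encoder skip connection.
        self.dec1 = nn.Sequential(nn.Conv2d(feat * 2, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.out = nn.Conv2d(feat, 1, 3, padding=1)  # predicted HF depth residual

    def forward(self, color_hr, edges_hr, depth_up):
        x = torch.cat([color_hr, edges_hr, depth_up], dim=1)
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = F.interpolate(e2, size=e1.shape[-2:], mode="bilinear", align_corners=False)
        d1 = self.dec1(d1) + e1  # skip connection helps preserve edge structure
        return self.out(d1)


def super_resolve(depth_lr, color_hr, edges_hr, net, scale=4):
    # Blurry guidance: naive bicubic upsampling of the LR depth to HR resolution.
    depth_up = F.interpolate(depth_lr, scale_factor=scale, mode="bicubic", align_corners=False)
    residual = net(color_hr, edges_hr, depth_up)  # HF residual predicted by the network
    return depth_up + residual  # global residual reconstruction of the HR depth


# Toy usage: 4x SR of a 32x32 LR depth map guided by a 128x128 color image.
net = ResidualEstimator()
depth_lr = torch.rand(1, 1, 32, 32)
color_hr = torch.rand(1, 3, 128, 128)
edges_hr = torch.rand(1, 1, 128, 128)  # e.g., a Sobel/Canny intensity-edge map (assumed input)
depth_hr = super_resolve(depth_lr, color_hr, edges_hr, net)
print(depth_hr.shape)  # torch.Size([1, 1, 128, 128])
```

A residual loss, as mentioned in the abstract, would then be applied between the predicted residual and the difference of the ground-truth HR depth and the upsampled LR depth; the exact loss and edge-guidance modules used by DSRNet are described in the paper itself.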
