Abstract

Studies over the past decades have shown that the quality of depth maps can be significantly improved by introducing guidance from intensity images of the same scenes. With the rise of deep convolutional neural networks, the performance of guided depth map super-resolution has improved further. Existing variants typically focus on deeper structures, optimized gradient flow, and feature reuse. Nevertheless, it is difficult to obtain sufficient and appropriate guidance from intensity features without any prior. In fact, features in the gradient domain, e.g., edges, exhibit strong correlations between the intensity image and the corresponding depth map, so guidance in the gradient domain can be explored more efficiently. In this paper, the depth features are iteratively upsampled by 2×. In each upsampling stage, the low-quality depth features and the corresponding gradient features are iteratively refined under the guidance of the intensity features via two parallel streams. Then, to make full use of depth features in both the image and gradient domains, the depth features and gradient features alternately complement each other. Extensive experimental results show improvements over state-of-the-art counterparts in both objective and subjective assessments. The code is available at https://github.com/Yifan-Zuo/MIG-net-gradient_guided_depth_enhancement.
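To make the described stage concrete, the following is a minimal PyTorch sketch of one 2× upsampling stage with two parallel, intensity-guided streams whose depth and gradient features alternately complement each other. It is an illustration only, not the authors' exact MIG-net: the module names, channel widths, fusion by concatenation, and the number of refinement iterations are assumptions.

    # Minimal sketch of one 2x guided upsampling stage (assumptions noted above).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GuidedRefineStage(nn.Module):
        """One 2x stage: parallel depth/gradient streams refined by intensity
        guidance, then alternately complemented with each other."""

        def __init__(self, channels=64, iterations=2):
            super().__init__()
            self.iterations = iterations
            # Intensity guidance fused by concatenation + small conv block (assumption).
            self.depth_refine = nn.Sequential(
                nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1))
            self.grad_refine = nn.Sequential(
                nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1))
            # Cross-domain complement: each stream is updated from the other.
            self.depth_from_grad = nn.Conv2d(2 * channels, channels, 3, padding=1)
            self.grad_from_depth = nn.Conv2d(2 * channels, channels, 3, padding=1)

        def forward(self, depth_feat, grad_feat, intensity_feat):
            # 2x upsampling of the low-quality depth and gradient features.
            depth_feat = F.interpolate(depth_feat, scale_factor=2,
                                       mode='bilinear', align_corners=False)
            grad_feat = F.interpolate(grad_feat, scale_factor=2,
                                      mode='bilinear', align_corners=False)
            for _ in range(self.iterations):
                # Parallel streams: refine each domain with intensity guidance.
                depth_feat = depth_feat + self.depth_refine(
                    torch.cat([depth_feat, intensity_feat], dim=1))
                grad_feat = grad_feat + self.grad_refine(
                    torch.cat([grad_feat, intensity_feat], dim=1))
                # Alternate complement between image and gradient domains.
                depth_feat = depth_feat + self.depth_from_grad(
                    torch.cat([depth_feat, grad_feat], dim=1))
                grad_feat = grad_feat + self.grad_from_depth(
                    torch.cat([grad_feat, depth_feat], dim=1))
            return depth_feat, grad_feat

    # Usage: stack one stage per 2x factor, e.g. three stages for 8x super-resolution.
    if __name__ == "__main__":
        stage = GuidedRefineStage(channels=64)
        d = torch.randn(1, 64, 32, 32)   # low-resolution depth features
        g = torch.randn(1, 64, 32, 32)   # low-resolution gradient features
        i = torch.randn(1, 64, 64, 64)   # intensity features at the 2x resolution
        d2, g2 = stage(d, g, i)
        print(d2.shape, g2.shape)        # both torch.Size([1, 64, 64, 64])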
