Abstract

Depth maps captured by sensors typically suffer from low resolution and random noise. Recently, by introducing guidance from the color image, deep convolutional neural networks (DCNNs) have shown significant improvements for depth map enhancement. However, most DCNN-based methods do not make full use of multi-scale guidance from the color image and therefore achieve sub-optimal performance. In this paper, we propose a novel DCNN that progressively reconstructs the high-resolution depth map under the guidance of the intensity image. Specifically, multi-scale intensity features are extracted to guide the refinement of the depth features as their resolution is gradually increased. Furthermore, local residual learning and global residual learning are adopted at the output of each up-sampling sub-network and of the whole network, respectively. This design recovers high-frequency details from coarse to fine. In addition, following the contiguous memory mechanism, dense connections are designed so that both low-level and high-level features are taken into account, which further exploits the guidance from the intensity image. To balance resource cost and performance, dimension reduction units are used to represent the features efficiently. The proposed network is compared with 17 state-of-the-art methods and shows improved performance.
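To make the described components concrete, the following is a minimal PyTorch sketch of one guided up-sampling stage. The class name, channel sizes, layer count, and the choice of sub-pixel convolution for up-sampling are assumptions for illustration only, not the authors' exact design; it merely shows how intensity guidance, dense connections, a dimension reduction unit, and local residual learning could fit together in one stage.

```python
# Hypothetical sketch of one guided up-sampling stage; layer names, channel
# sizes, and wiring are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn


class GuidedUpsampleStage(nn.Module):
    """Fuse intensity guidance with depth features, densely connect the conv
    layers, reduce dimensions with a 1x1 conv, add a local residual, and
    up-sample the refined depth features by a factor of 2."""

    def __init__(self, channels=64, num_layers=4):
        super().__init__()
        in_ch = channels * 2  # depth features concatenated with intensity guidance
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch + i * channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for i in range(num_layers)
        )
        # dimension reduction unit: 1x1 conv squeezes the densely connected features
        self.reduce = nn.Conv2d(in_ch + num_layers * channels, channels, 1)
        # sub-pixel (pixel-shuffle) up-sampling of the refined depth features
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),
        )

    def forward(self, depth_feat, intensity_feat):
        # dense connections: every layer sees all preceding feature maps
        feats = [torch.cat([depth_feat, intensity_feat], dim=1)]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        fused = self.reduce(torch.cat(feats, dim=1))
        fused = fused + depth_feat  # local residual learning within the stage
        return self.upsample(fused)


if __name__ == "__main__":
    stage = GuidedUpsampleStage()
    d = torch.randn(1, 64, 32, 32)  # low-resolution depth features
    g = torch.randn(1, 64, 32, 32)  # intensity features at the same scale
    print(stage(d, g).shape)        # torch.Size([1, 64, 64, 64])
```

In a full network, several such stages would be chained so the depth features are refined and up-sampled progressively, with intensity features supplied at each scale, and a global residual (e.g., a bicubically up-sampled input depth map) added to the final output.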
