Abstract

Depth map super-resolution (DMSR) is an effective way to improve the quality of depth maps captured by low-cost depth sensors. Most existing methods introduce guidance from RGB images of the same scene and achieve significant improvements. However, how to exploit the RGB information remains an open challenge because of the structural inconsistencies between RGB images and depth maps. In this letter, we present a novel convolutional neural network with deformable enhancement and adaptive fusion, termed DEAF-Net, to further improve the performance of DMSR. Specifically, we design a deformable convolution enhancement module in which abundant color features are used to enhance depth features. An adaptive feature fusion module is then exploited to improve the efficiency of fully connected feature fusion. Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed method.
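To make the deformable enhancement idea concrete, the following is a minimal PyTorch sketch of one plausible design: color-branch features predict the sampling offsets of a deformable convolution that is applied to the depth features. The module and variable names (DeformableEnhancement, offset_pred, etc.) and the residual connection are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableEnhancement(nn.Module):
    """Hypothetical sketch: RGB-guided deformable enhancement of depth features."""
    def __init__(self, channels=64, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Predict 2 offsets (x, y) per kernel sampling location from color features.
        self.offset_pred = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=pad)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, depth_feat, rgb_feat):
        offsets = self.offset_pred(rgb_feat)         # guidance-driven sampling offsets
        enhanced = self.deform(depth_feat, offsets)  # resample/enhance depth features
        return depth_feat + enhanced                 # residual enhancement (assumed)

# Usage: same-resolution depth and color feature maps.
x_depth = torch.randn(1, 64, 32, 32)
x_color = torch.randn(1, 64, 32, 32)
print(DeformableEnhancement()(x_depth, x_color).shape)  # torch.Size([1, 64, 32, 32])
```

Letting the color branch drive the offsets allows the depth features to be resampled along RGB edges, which is one common way to sidestep the structural inconsistencies between the two modalities.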
