Depth map super-resolution (DSR) aims to restore high-resolution (HR) depth maps from their low-resolution (LR) counterparts, with color images commonly used as guidance for the restoration. However, the degradation of LR depth is intricate, and previous image-guided DSR approaches, which model this degradation only implicitly in the spatial domain, often produce unsatisfactory results. To address this challenge, we propose the Degradation-Guided Multi-modal Fusion Network (DMFNet), which explicitly characterizes the degradation and performs multi-modal fusion in both the spatial and frequency domains to improve depth quality. Specifically, we first introduce a deep degradation regularization loss that enables the model to learn the explicit degradation from the LR depth maps. Simultaneously, DMFNet converts the color images and depth maps into spectrum representations to provide comprehensive multi-domain guidance. We then present a multi-modal fusion block that restores the depth maps by leveraging both the RGB-D spectrum representations and the learned depth degradation. Extensive experiments demonstrate that DMFNet achieves state-of-the-art (SoTA) performance on four benchmarks: the NYU-v2, Middlebury, Lu, and RGB-D-D datasets.
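To make the frequency-domain idea concrete, here is a minimal sketch of converting RGB-D inputs into spectrum representations with a 2D FFT and fusing them in the frequency domain. This is not the authors' DMFNet implementation; the module name `SpectrumFusionSketch`, the channel widths, and the 1x1-convolution fusion are illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's architecture): RGB and
# depth are mapped to spectra via rfft2, projected, fused, and inverted back.
import torch
import torch.nn as nn


class SpectrumFusionSketch(nn.Module):
    def __init__(self, rgb_ch=3, depth_ch=1, feat_ch=32):
        super().__init__()
        # 1x1 convs mix the real/imaginary parts of each modality's spectrum.
        self.rgb_proj = nn.Conv2d(2 * rgb_ch, feat_ch, kernel_size=1)
        self.depth_proj = nn.Conv2d(2 * depth_ch, feat_ch, kernel_size=1)
        self.fuse = nn.Conv2d(2 * feat_ch, 2 * depth_ch, kernel_size=1)

    @staticmethod
    def to_spectrum(x):
        # rfft2 returns a complex tensor; stack real/imag parts as channels.
        spec = torch.fft.rfft2(x, norm="ortho")
        return torch.cat([spec.real, spec.imag], dim=1)

    def forward(self, rgb, depth):
        h, w = depth.shape[-2:]
        rgb_spec = self.rgb_proj(self.to_spectrum(rgb))
        depth_spec = self.depth_proj(self.to_spectrum(depth))
        fused = self.fuse(torch.cat([rgb_spec, depth_spec], dim=1))
        # Reassemble a complex spectrum and invert to the spatial domain.
        real, imag = fused.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")


rgb = torch.randn(1, 3, 64, 64)    # guidance color image
depth = torch.randn(1, 1, 64, 64)  # LR depth, pre-upsampled to target size
print(SpectrumFusionSketch()(rgb, depth).shape)  # torch.Size([1, 1, 64, 64])
```

In the actual method, such frequency-domain guidance would be combined with the spatial-domain branch and the learned degradation; this sketch only illustrates the spectrum-conversion step named in the abstract.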