Abstract

Guided depth super-resolution aims to recover a high-resolution depth map from a low-resolution depth map and an associated high-resolution RGB image. However, restoring precise and sharp edges near depth discontinuities and fine structures remains challenging for state-of-the-art methods. To alleviate this issue, we propose a novel multi-stage depth super-resolution network, which progressively reconstructs high-resolution depth maps from explicit and implicit high-frequency information. We introduce an efficient transformer to obtain explicit high-frequency information. The shape bias and global context of the transformer allow our model to focus on high-frequency details between objects, i.e., depth discontinuities, rather than texture within objects. Furthermore, we project the input color images into the frequency domain to extract additional implicit high-frequency cues. Finally, to incorporate the structural details, we develop a fusion strategy that combines depth features and high-frequency information in the multi-stage, multi-scale framework. Exhaustive experiments on the main benchmarks show that our approach establishes a new state-of-the-art. Code will be publicly available at https://github.com/wudiqx106/DSR-EI.
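The frequency-domain projection mentioned above can be illustrated with a minimal sketch: a standard FFT high-pass filter that suppresses low frequencies of a (grayscale) guidance image and keeps the high-frequency residual. This is a generic illustration, not the paper's actual module; the function name `highpass_cues` and the `cutoff_ratio` parameter are assumptions made for this example.

```python
import numpy as np

def highpass_cues(img, cutoff_ratio=0.1):
    """Extract high-frequency cues from a 2-D grayscale image via an FFT
    high-pass mask. `cutoff_ratio` (hypothetical parameter) is the fraction
    of the spectrum around DC, per axis, that is suppressed."""
    f = np.fft.fftshift(np.fft.fft2(img))          # centered spectrum
    h, w = img.shape
    cy, cx = h // 2, w // 2
    ry, rx = int(h * cutoff_ratio), int(w * cutoff_ratio)
    mask = np.ones((h, w), dtype=bool)
    mask[cy - ry:cy + ry + 1, cx - rx:cx + rx + 1] = False  # zero out low band
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```

A flat (constant) image carries all of its energy at DC, so its high-pass residual is near zero, whereas edges and fine textures survive the filter — which is the intuition behind using such cues near depth discontinuities.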
