Abstract
Scene depth super-resolution (DSR) is an inherently ill-posed problem: a given low-resolution (LR) depth map, which carries limited depth information, admits many plausible high-resolution (HR) reconstructions, making it difficult to identify an optimal solution within this large space of one-to-many mappings. Although simple constraints have been proposed for the DSR task, the relationships among the LR depth map, the HR depth map, and the color image have not been thoroughly investigated. In this paper, we introduce a novel mapping constraint network (MCNet) that incorporates additional constraints derived from both LR depth maps and color images, with the aim of narrowing the space of mapping functions and improving DSR performance. Specifically, alongside the primary DSR network (DSRNet) dedicated to learning the LR-to-HR mapping, we develop an auxiliary degradation network (ADNet) that operates in reverse, regenerating the LR depth map from the reconstructed HR depth map to obtain depth features in the LR space. To enhance the learning of DSRNet, we introduce two mapping constraints in the LR space: 1) a cycle-consistent constraint, which provides additional supervision by forming a closed loop between the LR-to-HR and HR-to-LR mappings, and 2) a region-level contrastive constraint, which reinforces region-specific HR representations by explicitly modeling the consistency between the LR and HR spaces. To exploit the color image effectively, we introduce a feature screening module (FSM) that adaptively fuses color features at different layers, simultaneously preserving strong structural context and suppressing texture distraction through subspace generation and image projection.
Comprehensive experiments on synthetic and real-world benchmark datasets demonstrate the superiority of the proposed method over state-of-the-art DSR methods. MCNet reduces the average MAD of the best competing method by 3.7% and 7.5% for the \(\times 8\) and \(\times 16\) cases on the Middlebury dataset, respectively, without incurring additional cost at inference.
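The cycle-consistent constraint described above can be illustrated with a minimal NumPy sketch. This is a hypothetical toy version, not the paper's implementation: the learned DSRNet (LR-to-HR) and ADNet (HR-to-LR) are replaced by stand-in operators (nearest-neighbour upsampling and box-filter downsampling), and the loss supervises the LR-to-HR-to-LR round trip against the original LR input.

```python
import numpy as np

def dsrnet_stub(lr, scale):
    """Stand-in for the learned LR-to-HR mapping (nearest-neighbour upsample)."""
    return lr.repeat(scale, axis=0).repeat(scale, axis=1)

def adnet_stub(hr, scale):
    """Stand-in for the learned HR-to-LR degradation (box-filter downsample)."""
    h, w = hr.shape
    return hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def cycle_consistent_loss(lr, scale):
    """L1 distance between the LR depth input and its LR->HR->LR round trip."""
    hr_pred = dsrnet_stub(lr, scale)   # closed loop: forward mapping...
    lr_cycle = adnet_stub(hr_pred, scale)  # ...then reverse degradation
    return float(np.abs(lr_cycle - lr).mean())

lr_depth = np.random.rand(16, 16).astype(np.float64)
loss = cycle_consistent_loss(lr_depth, scale=8)
```

With these idealized stubs the box average exactly inverts the nearest-neighbour upsample, so the loss is zero; with learned networks the residual is nonzero and serves as an extra supervision signal in the LR space.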
Published in: ACM Transactions on Multimedia Computing, Communications, and Applications