Abstract

Convolutional neural networks (CNNs) are highly successful for image super-resolution (SR). However, they often require sophisticated architectures with high memory cost and computational overhead, significantly restricting their practical deployment on resource-limited devices. In this paper, we propose a novel dynamic contrastive self-distillation (Dynamic-CSD) framework to simultaneously compress and accelerate various off-the-shelf SR models, and explore using the trained model for dynamic inference. In particular, to build a compact student network, a channel-splitting super-resolution network (CSSR-Net) is first constructed from a target teacher network. Then, we propose a novel contrastive loss to improve the quality of SR images via explicit knowledge transfer. Furthermore, progressive CSD (Pro-CSD) is developed to extend the two-branch CSSR-Net into a multi-branch network, yielding a model that is switchable at runtime. Finally, a difficulty-aware branch selection strategy for dynamic inference is given. Extensive experiments demonstrate that the proposed Dynamic-CSD scheme effectively compresses and accelerates several standard SR models such as EDSR, RCAN and CARN.
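To illustrate the channel-splitting idea behind constructing a compact student from a teacher, the sketch below slices each convolutional weight tensor to keep only a fraction of its channels. This is a minimal, hypothetical illustration in NumPy (the function name `split_conv_weight` and the 0.5 split ratio are assumptions for exposition, not the authors' exact implementation).

```python
import numpy as np

def split_conv_weight(w, ratio=0.5):
    """Keep the first fraction of output and input channels of a conv
    weight tensor shaped (out_ch, in_ch, kH, kW), so the student branch
    shares (a slice of) the teacher's parameters.

    Note: in practice the first layer would keep all input channels and
    the last layer all output channels; this sketch ignores those edges.
    """
    out_ch, in_ch = w.shape[:2]
    out_keep = max(1, int(out_ch * ratio))
    in_keep = max(1, int(in_ch * ratio))
    return w[:out_keep, :in_keep]

# A teacher conv layer with 64 input and 64 output channels, 3x3 kernels.
teacher_w = np.random.randn(64, 64, 3, 3)
student_w = split_conv_weight(teacher_w, ratio=0.5)
print(student_w.shape)  # (32, 32, 3, 3)
```

Because the student's weights are a sub-tensor of the teacher's, both branches can run from one set of shared parameters, which is what makes the multi-branch model switchable at inference time.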
