Abstract

Efficient image super-resolution (SR), which is preferred in resource-constrained scenarios, aims at not only higher super-resolving accuracy but also lower computational complexity. Given the perception capability of deep networks, efficiently and effectively obtaining a large receptive field is a key principle for this task. In this paper, we therefore integrate a multi-scale receptive field design with an information distillation structure and an attention mechanism, and develop a lightweight Multi-Scale Information Distillation (MSID) network. Specifically, we design a multi-scale feature distillation (MSFD) block by incorporating multi-scale convolutions with different kernel sizes into the feature distillation connection, which effectively distills information from multiple receptive fields at low computational cost for better feature refinement. Moreover, we construct a scalable large kernel attention (SLKA) block by scaling the attentive fields across network layers, which possesses a large and scalable receptive field to discriminatively enhance the distilled features. Extensive quantitative and qualitative evaluations on benchmark datasets validate the effectiveness of each proposed component and demonstrate the superiority of our MSID network over state-of-the-art efficient SR methods. The code is available at https://github.com/YuanfeiHuang/MSID.
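To make the two components concrete, below is a minimal PyTorch sketch of how an MSFD block (multi-scale convolutions feeding a feature distillation connection) and an SLKA block (large kernel attention whose receptive field grows across layers) could be structured. The channel split, the 3/5/7 kernel sizes, the depthwise decomposition of the large kernel, and the dilation schedule are all illustrative assumptions rather than the authors' exact design; the linked repository contains the reference implementation.

```python
# Illustrative sketch only -- channel splits, kernel sizes and the dilation
# schedule are assumptions, not the authors' reference implementation.
import torch
import torch.nn as nn

class MSFD(nn.Module):
    """Multi-scale feature distillation (sketch): at each stage, part of the
    features are distilled (kept) via a 1x1 conv and the rest are refined by
    a convolution whose kernel grows, giving multiple receptive fields cheaply."""
    def __init__(self, channels: int, distill_ratio: float = 0.5):
        super().__init__()
        self.d = int(channels * distill_ratio)   # distilled channels per stage
        r = channels - self.d                    # remaining (refined) channels
        self.distill = nn.ModuleList(
            nn.Conv2d(c_in, self.d, 1) for c_in in (channels, r, r)
        )
        # multi-scale refinement with 3x3, 5x5 and 7x7 kernels (assumed sizes)
        self.refine = nn.ModuleList(
            nn.Conv2d(c_in, r, k, padding=k // 2)
            for c_in, k in ((channels, 3), (r, 5), (r, 7))
        )
        self.fuse = nn.Conv2d(3 * self.d + r, channels, 1)
        self.act = nn.GELU()

    def forward(self, x):
        distilled = []
        for dist, ref in zip(self.distill, self.refine):
            distilled.append(self.act(dist(x)))  # keep a distilled slice
            x = self.act(ref(x))                 # refine the remainder
        return self.fuse(torch.cat(distilled + [x], dim=1))

class SLKA(nn.Module):
    """Scalable large kernel attention (sketch): a depthwise conv followed by
    a dilated depthwise conv approximates a large kernel; the dilation grows
    with the layer index so deeper blocks attend over larger fields."""
    def __init__(self, channels: int, layer_idx: int):
        super().__init__()
        dilation = 2 + layer_idx                 # assumed per-layer schedule
        self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        self.dw_dilated = nn.Conv2d(channels, channels, 5,
                                    padding=2 * dilation, dilation=dilation,
                                    groups=channels)
        self.pw = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        attn = self.pw(self.dw_dilated(self.dw(x)))  # attention map
        return x * attn                              # re-weight input features

# quick shape check
if __name__ == "__main__":
    x = torch.randn(1, 48, 32, 32)
    y = SLKA(48, layer_idx=0)(MSFD(48)(x))
    print(y.shape)  # torch.Size([1, 48, 32, 32])
```

Decomposing the large kernel into a depthwise convolution followed by a dilated depthwise convolution keeps the parameter count low while enlarging the receptive field, which is consistent with the abstract's emphasis on low computational cost.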
