Abstract

Although developments in deep learning have yielded considerable performance gains in super-resolution (SR), they have also caused substantial increases in computational cost and memory requirements. Thus, various compression techniques, such as quantisation, pruning, and knowledge distillation (KD), have been introduced for single SR models. However, multiple SR models are required in the real world to robustly reconstruct low-resolution (LR) images of varying input sizes. Owing to limited resources, storing multiple models on mobile devices and embedded systems is impractical. In this letter, we propose a multi-scale SR network that uses a weight-sharing method to effectively eliminate redundant parameters. To train our multi-scale SR network while mitigating the SR performance degradation caused by knowledge confusion, we divide backpropagation into two stages. Furthermore, we propose a compression framework that distils the shared knowledge within a multi-scale SR network. Compared with storing a separate single-scale SR model per scale, we achieve a compression rate of 94% while sacrificing only 0.3 dB on average in peak signal-to-noise ratio (PSNR).
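To make the weight-sharing and two-stage ideas concrete, the following is a minimal PyTorch sketch. The layer sizes, module names (MultiScaleSR, two_stage_step), and the exact split of the two training stages are illustrative assumptions, not the authors' implementation; the sketch only shows the general pattern of a shared feature-extraction body with scale-specific upsampling tails, each updated in its own backpropagation stage.

    # Minimal sketch, assuming a shared body with per-scale sub-pixel tails.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiScaleSR(nn.Module):
        # One feature-extraction body is shared by all scales; only the
        # sub-pixel upsampling tails carry scale-specific parameters.
        def __init__(self, channels=64, scales=(2, 3, 4)):
            super().__init__()
            self.head = nn.Conv2d(3, channels, 3, padding=1)
            self.body = nn.Sequential(
                *[nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                nn.ReLU(inplace=True)) for _ in range(4)])
            self.tails = nn.ModuleDict({
                str(s): nn.Sequential(nn.Conv2d(channels, 3 * s * s, 3, padding=1),
                                      nn.PixelShuffle(s))
                for s in scales})

        def forward(self, x, scale):
            return self.tails[str(scale)](self.body(self.head(x)))

    def two_stage_step(model, opt_shared, opt_tails, batches):
        # Stage 1: update only the shared head/body on the accumulated
        # multi-scale loss, keeping the scale-specific tails fixed.
        opt_shared.zero_grad()
        loss = sum(F.l1_loss(model(lr, s), hr) for s, (lr, hr) in batches.items())
        loss.backward()
        opt_tails.zero_grad()   # discard tail gradients from stage 1
        opt_shared.step()
        # Stage 2: update only the scale-specific tails so that the scales
        # do not interfere with one another (knowledge confusion).
        opt_tails.zero_grad()
        for s, (lr, hr) in batches.items():
            F.l1_loss(model(lr, s), hr).backward()
        opt_shared.zero_grad()  # discard shared gradients from stage 2
        opt_tails.step()

A usage example under the same assumptions: one optimiser over the shared parameters and one over the tails, with one (LR, HR) batch per scale.

    model = MultiScaleSR()
    opt_shared = torch.optim.Adam(
        list(model.head.parameters()) + list(model.body.parameters()), lr=1e-4)
    opt_tails = torch.optim.Adam(model.tails.parameters(), lr=1e-4)
    batches = {s: (torch.randn(1, 3, 24, 24), torch.randn(1, 3, 24 * s, 24 * s))
               for s in (2, 3, 4)}
    two_stage_step(model, opt_shared, opt_tails, batches)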
