Abstract

In recent years, the performance of single image super-resolution (SISR) methods based on deep neural networks has improved significantly. However, large model sizes and high computational costs remain common problems for most SR networks. Meanwhile, a trade-off exists between higher reconstruction fidelity and better perceptual quality in solving the SISR problem. In this paper, we propose a multi-teacher knowledge distillation approach for SR (MTKDSR) that can train a balanced, lightweight, and efficient student network using different types of teacher models, each proficient in either reconstruction fidelity or perceptual quality. In addition, to generate more realistic and learnable textures, we propose an edge-guided SR network, EdgeSRN, used as the perceptual teacher in the MTKDSR framework. In our experiments, EdgeSRN was superior to models based on adversarial learning in terms of effective knowledge transfer. Extensive experiments show that the student trained by MTKDSR exhibits superior perceptual quality compared to state-of-the-art lightweight SR networks, with a smaller model size and fewer computations. Our code is available at https://github.com/lizhangray/MTKDSR.
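The multi-teacher setup described above can be sketched as a weighted training objective in which the student is supervised jointly by the ground truth, a fidelity-oriented teacher, and a perceptual teacher such as EdgeSRN. This is a minimal illustrative sketch under assumed loss weights and a simple L1 distance; the function names and weights are hypothetical and do not reproduce the paper's actual implementation:

```python
# Hypothetical sketch of a multi-teacher distillation objective in the spirit
# of MTKDSR. The student output is pulled toward the ground truth, a fidelity
# teacher's output (e.g. a PSNR-oriented model), and a perceptual teacher's
# output (e.g. EdgeSRN). Names and weights are illustrative assumptions.

def l1(a, b):
    """Mean absolute error over flat lists of pixel values."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def mtkd_loss(student_out, ground_truth, fidelity_out, perceptual_out,
              w_gt=1.0, w_fid=0.5, w_per=0.5):
    """Weighted sum of a reconstruction term and two distillation terms."""
    return (w_gt * l1(student_out, ground_truth)
            + w_fid * l1(student_out, fidelity_out)
            + w_per * l1(student_out, perceptual_out))

# Toy usage with flat "images" of three pixels each
student = [0.2, 0.4, 0.6]
gt = [0.0, 0.5, 1.0]
fid = [0.1, 0.45, 0.9]
per = [0.3, 0.5, 0.8]
loss = mtkd_loss(student, gt, fid, per)
```

In practice the teachers are frozen pretrained networks and the weights control the fidelity/perception balance the abstract refers to.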
