Abstract
In recent years, deep convolutional neural networks have played an increasingly important role in single-image super-resolution (SR). However, as networks grow deeper and wider, convolution-based SR methods face problems such as training difficulty, high memory consumption, and slow inference. Furthermore, most methods do not make full use of image gradient information, which leads to the loss of the image's geometric structure. To address these problems, we propose a gradient information distillation network. On the one hand, information distillation preserves the advantages of speed and a lightweight model; on the other hand, gradient information improves SR performance. Our network has two branches: a gradient information distillation branch (GIDB) and an image information distillation branch. To combine features from both branches, we also introduce a residual feature transfer (RFT) mechanism. Together, GIDB and RFT allow the network to retain rich geometric structure information, making the edge details of the reconstructed image sharper. Experimental results show that our method outperforms existing methods while effectively limiting the model's parameter count, computation, and running time, making real-time image processing and mobile applications feasible.
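The abstract refers to the image gradient information that the GIDB consumes; the paper itself does not specify the operator here, but gradient maps in SR work are commonly extracted with a Sobel-style filter. The sketch below is a minimal, hypothetical illustration of such a gradient map (plain NumPy, no deep-learning framework assumed):

```python
import numpy as np

def gradient_map(img):
    """Compute a gradient magnitude map of a grayscale image
    using 3x3 Sobel filters (a common choice; the paper's exact
    operator may differ)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float64)  # horizontal gradient kernel
    ky = kx.T                                      # vertical gradient kernel
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    # Magnitude is large at edges, near zero in flat regions,
    # which is the geometric-structure signal a gradient branch exploits.
    return np.sqrt(gx ** 2 + gy ** 2)

# A synthetic vertical edge: the gradient map peaks at the boundary
# and vanishes in the flat regions on either side.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
g = gradient_map(img)
```

Such a map is high only along edges, which illustrates why feeding it to a dedicated branch can help the network preserve sharp edge details.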