Abstract

Recently, convolutional neural networks have been employed to obtain better performance in the single image super-resolution task. Most of these models are trained and evaluated on synthetic datasets in which low-resolution images are synthesized with a known bicubic degradation, and hence they perform poorly on real-world images. The super-resolution (SR) performance can be improved by stacking more convolutional layers, but this increases the number of training parameters and imposes a heavy computational burden, making such models unsuitable for real-world applications. To address this problem, we propose a computationally efficient real-world image SR network, referred to as RSRN. The RSRN model is optimized using a pixel-wise $$L_1$$ loss function, which produces overly smooth, blurry images. Hence, to recover the perceptual quality of the SR image, a real-world image SR model based on a generative adversarial network, called RSRGAN, is proposed. Generative adversarial networks have the ability to generate perceptually plausible solutions. Several experiments have been conducted to validate the effectiveness of the proposed RSRGAN model, showing that RSRGAN generates SR samples with more high-frequency detail and better perceptual quality than the recently proposed SRGAN and $$\hbox {SRFeat}_{\textit{IF}}$$ models, while achieving performance comparable to the ESRGAN model with a significantly smaller number of training parameters.
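The abstract contrasts a pixel-wise $$L_1$$ objective (which favors low distortion but tends to over-smooth) with an adversarial objective (which favors perceptual quality). The following is a minimal sketch of that contrast in PyTorch; it is not the authors' code, and the generator/discriminator outputs, the loss weighting, and the weight value are illustrative assumptions.

```python
# Minimal sketch (not the RSRGAN implementation): pixel-wise L1 fidelity
# term plus an adversarial term, as commonly combined in GAN-based SR.
import torch
import torch.nn as nn

l1_loss = nn.L1Loss()              # pixel-wise loss, as used to train RSRN
adv_loss = nn.BCEWithLogitsLoss()  # one common choice of GAN loss

def generator_loss(sr, hr, d_sr, lambda_adv=1e-3):
    """Combine pixel-wise fidelity with an adversarial term.

    sr         : super-resolved batch produced by the generator
    hr         : ground-truth high-resolution batch
    d_sr       : discriminator logits for sr (hypothetical D(sr))
    lambda_adv : weight of the adversarial term (illustrative value)
    """
    pixel = l1_loss(sr, hr)                              # low distortion, but can over-smooth
    adversarial = adv_loss(d_sr, torch.ones_like(d_sr))  # pushes sr toward perceptually plausible images
    return pixel + lambda_adv * adversarial
```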
