Abstract

Single-Image Super-Resolution (SISR), which aims to increase the resolution of an image, has long been an important topic in image processing and is of great practical significance. Recently, SISR has made substantial progress aided by deep learning (DL), which has demonstrated impressive potential in many low-level vision tasks. Most current DL-based SISR approaches, however, rely on supervised learning, whereas in the real world only low-resolution (LR) images with unknown degradation are available, which limits the applicability of supervised models. To mitigate this problem, this paper proposes a two-stage semi-supervised SISR method called SRAttentionGAN. First, an upsampling network, SRResNet, pre-trained in a supervised manner, scales the LR image to the desired size. The upsampled result is then fed into an improved unsupervised CycleGAN framework, which requires no paired samples, to obtain sharper and more realistic super-resolution (SR) images. Specifically, in the improved CycleGAN, an attention-guided generator is proposed to perceive the discriminative semantic parts shared between the source and target images, avoiding interference from low-level information and preventing shifts in the overall color tone. A multi-scale discriminator is also adopted to further enrich texture details. The effectiveness of the proposed SRAttentionGAN is validated quantitatively and qualitatively on four benchmarks (Set5, Set14, Urban100, and BSDS100). Compared with state-of-the-art methods, the results are visually promising and show competitive performance on two perceptual metrics, the Natural Image Quality Evaluator (NIQE) and the Perception Index (PI), which agree better with human visual perception.
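To make the two-stage data flow concrete, the following is a minimal, hypothetical sketch of the pipeline structure described above. The actual stages are deep networks (a supervised pre-trained SRResNet upsampler and an unsupervised attention-guided CycleGAN generator); here each stage is replaced by a trivial stand-in so the composition is runnable with the standard library alone. All function names are illustrative, not from the paper's code.

```python
# Toy sketch of the SRAttentionGAN two-stage pipeline (stand-in stages only).

def stage1_upsample(img, scale=4):
    """Stand-in for the supervised SRResNet upsampler: nearest-neighbour
    interpolation that scales an H x W grayscale image (list of lists)
    to the target size."""
    return [
        [row[c // scale] for c in range(len(row) * scale)]
        for row in img
        for _ in range(scale)  # repeat each row `scale` times
    ]

def stage2_refine(img):
    """Stand-in for the unsupervised attention-guided CycleGAN generator:
    the real model sharpens textures while an attention map keeps the
    overall color tone unchanged. Here it is the identity, so only the
    pipeline shape is demonstrated."""
    return img

def sr_pipeline(lr_img, scale=4):
    """Two-stage semi-supervised SISR: upsample to target size, then refine."""
    return stage2_refine(stage1_upsample(lr_img, scale))

lr = [[0, 1], [2, 3]]          # toy 2x2 "low-resolution" image
sr = sr_pipeline(lr, scale=2)  # 4x4 "super-resolved" output
```

The point of the sketch is the decoupling: stage 1 fixes the spatial size (so the unpaired stage 2 can operate same-size to same-size), while stage 2 only has to improve realism, which is what lets CycleGAN-style unpaired training be used.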
