Abstract

Super-resolution (SR) image generation aims to produce a high-resolution image from a given low-resolution image, and it supports numerous vision tasks such as small-object detection and specialized image processing. The generative adversarial network for super resolution (SRGAN) is the mainstream framework for the SR task; it combines two reconstruction losses, an MSE loss and a VGG loss. However, the influence of these losses on model learning has not been examined. In this paper, the importance of the MSE and VGG losses is analyzed. The loss function of the traditional SRGAN is improved by manually setting the weights of the MSE and VGG losses, sweeping the weights from 0 to 1 at a sampling interval of 0.1. Furthermore, a learnable parameter that dynamically adjusts the two weights is proposed. Experiments on the Set5, Set14, BSD100, and Urban100 datasets show that our method generates much better images than SRGAN, with higher PSNR and SSIM values. We find that the MSE loss contributes more to learning the discriminative model, while the VGG loss plays a supplementary role. Our WSRGAN can be applied to most SRGAN-based methods to improve their accuracy.
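The weighting scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`weighted_sr_loss`, `learnable_alpha`) and the sigmoid parameterization of the learnable weight are assumptions made here for clarity; only the convex combination of the MSE and VGG losses and the 0.1 sampling interval come from the abstract.

```python
import math

def weighted_sr_loss(mse_loss: float, vgg_loss: float, alpha: float) -> float:
    """Combine the two reconstruction losses with weights alpha and (1 - alpha).

    In the fixed-weight variant, alpha is swept over {0.0, 0.1, ..., 1.0};
    in the learnable variant, alpha would be optimized jointly with the
    generator (parameterization below is a hypothetical example).
    """
    return alpha * mse_loss + (1.0 - alpha) * vgg_loss

def learnable_alpha(theta: float) -> float:
    """Map an unconstrained scalar theta to a weight in (0, 1) via a sigmoid,
    one common way to keep a learnable weight bounded (assumed here)."""
    return 1.0 / (1.0 + math.exp(-theta))

# Sweep the fixed weights at the paper's 0.1 sampling interval.
fixed_weights = [round(0.1 * k, 1) for k in range(11)]  # 0.0, 0.1, ..., 1.0
combined = [weighted_sr_loss(0.5, 0.8, a) for a in fixed_weights]
```

In a training framework such as PyTorch, `theta` would be a trainable tensor updated by the same optimizer as the generator, so the balance between the MSE and VGG terms adapts during training rather than being fixed by hand.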
