Many existing face super-resolution methods do not account for the degradation factors that affect face images in the real world, and their reconstructions suffer from insufficient resolution, blur, over-smoothing, artifacts, and loss of fine detail. This paper proposes AGRNet, an adaptive global residual network for face super-resolution reconstruction that combines an adaptive activation function with a global attention mechanism. A degradation formula with randomized parameters is then used to simulate real-world face degradation, and multi-scale discriminators together with a composite loss function extend the model to AGRNet-HR, which reconstructs realistically degraded faces at higher quality. AGRNet achieves an SSIM of 0.8352 and a PSNR of 27.54 on the Helen test set; AGRNet-HR achieves an LPIPS of 0.2633 on the CelebAHQ test set and an FID of 26.27 on a real-world test set composed of low-resolution faces and old photos from CelebA. Comparisons of quantitative metrics and visual quality against mainstream methods show that AGRNet and AGRNet-HR are competitive, and ablation experiments verify the effectiveness of the model's key modules.
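The abstract does not state the exact degradation formula; a commonly used randomized formulation for this kind of real-world degradation simulation, given here only as an illustrative assumption, is

\[
y \;=\; \mathrm{JPEG}_{q}\!\Big[\big(x \ast k_{\sigma}\big)\!\downarrow_{r} \;+\; n_{\delta}\Big],
\]

where \(x\) is the high-resolution face, \(k_{\sigma}\) a blur kernel, \(\downarrow_{r}\) downsampling by factor \(r\), \(n_{\delta}\) additive noise, \(\mathrm{JPEG}_{q}\) compression at quality \(q\), and the parameters \(\sigma, r, \delta, q\) are sampled at random for each training image.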