Face super-resolution technology can significantly enhance the resolution and quality of face images, which is crucial for applications such as surveillance, forensics, and face recognition. However, existing methods often fail to fully exploit multi-scale information and facial priors, resulting in poor recovery of facial structures in complex images. To address this issue, we propose a face super-resolution method based on iterative collaboration between a facial reconstruction network and a landmark estimation network. The method employs a Multi-Convolutional Attention Block for multi-scale feature extraction and introduces an Attention Fusion Block to enhance features with facial priors; the features are then further refined by a Residual Window Attention Group. In addition, the facial reconstruction network and the landmark estimation network collaborate iteratively: at each step, landmark priors are used to generate higher-quality images, which in turn enable more accurate landmark estimation, progressively improving performance. Evaluated on the standard 4×, 8×, and 16× super-resolution tasks on the CelebA and Helen datasets, the method demonstrates strong performance and achieves competitive SSIM, PSNR, and LPIPS scores. In particular, on the 8× super-resolution task, it reaches 27.68 dB PSNR, 0.8112 SSIM, and 0.0866 LPIPS on the CelebA dataset, outperforming existing state-of-the-art methods in both accuracy and visual quality.
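The core of the approach is the iterative collaboration between the reconstruction and landmark networks. The sketch below illustrates that loop under stated assumptions: the module names (sr_branch, landmark_branch), the 68-channel heatmap prior, and the fixed number of iterations are illustrative choices, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class IterativeCollabSR(nn.Module):
    """Minimal sketch of alternating face reconstruction and landmark estimation."""

    def __init__(self, sr_branch: nn.Module, landmark_branch: nn.Module,
                 scale: int = 8, num_landmarks: int = 68, steps: int = 3):
        super().__init__()
        self.sr_branch = sr_branch              # maps (LR face, landmark heatmaps) -> SR face
        self.landmark_branch = landmark_branch  # maps SR face -> landmark heatmaps
        self.scale = scale
        self.num_landmarks = num_landmarks
        self.steps = steps

    def forward(self, lr_face: torch.Tensor):
        b, _, h, w = lr_face.shape
        # Start from an uninformative (zero) landmark prior at the target resolution.
        heatmaps = lr_face.new_zeros(b, self.num_landmarks, h * self.scale, w * self.scale)
        sr_face = None
        for _ in range(self.steps):
            # 1) The current landmark prior guides the reconstruction network.
            sr_face = self.sr_branch(lr_face, heatmaps)
            # 2) The sharper SR face yields a better landmark estimate for the next step.
            heatmaps = self.landmark_branch(sr_face)
        return sr_face, heatmaps
```

In this sketch each iteration feeds the improved image back into the landmark estimator, mirroring the abstract's description of priors and reconstructions refining each other step by step.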