Abstract
With the development of convolutional neural networks (CNNs), deep learning-based face super-resolution (FSR) approaches have achieved remarkable results in recent years. However, existing FSR methods often rely on facial prior knowledge, which greatly increases network complexity and computation. In this paper, we propose DAFNet, a dual attention fusion-based FSR network composed of multiple face attention fusion modules (FAFMs). Each FAFM is a residual structure containing an attention mechanism and is divided into a feature branch and an attention branch. The feature branch introduces an hourglass block to extract multi-scale information, while the attention branch applies channel attention and spatial attention in series. This design ensures that the network prioritizes important information and effectively recovers detailed facial features. Luminance-chrominance error loss and gradient loss are introduced to guide the training process. Additionally, adversarial loss and perceptual loss are incorporated to enhance the recovery of visually realistic face images. Notably, our method can produce clear faces at a high upscaling factor of ×8 without relying on any facial prior information, effectively reducing network complexity. Quantitative and qualitative experiments conducted on the CelebA and Helen datasets underscore the effectiveness of the proposed model.
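To illustrate the serial channel-plus-spatial attention arrangement the abstract describes, here is a minimal NumPy sketch. It is not the authors' implementation: the learned convolutions and fully connected layers of a real attention block are replaced with fixed random stand-ins, and the tensor shapes are chosen arbitrarily for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, reduction=4):
    """Squeeze-and-excitation-style channel gate. x has shape (C, H, W)."""
    c = x.shape[0]
    z = x.mean(axis=(1, 2))                          # global average pool -> (C,)
    # Two small linear layers; random weights stand in for learned parameters.
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    gate = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))     # (C,) per-channel weights
    return x * gate[:, None, None]

def spatial_attention(x):
    """Gate each spatial position using channel-pooled statistics."""
    pooled = np.stack([x.mean(axis=0), x.max(axis=0)])  # (2, H, W)
    gate = sigmoid(pooled.mean(axis=0))                 # (H, W); stands in for a conv
    return x * gate[None, :, :]

def dual_attention_fusion(x):
    """Channel attention followed by spatial attention, in series,
    wrapped in a residual connection as in the FAFM description."""
    return x + spatial_attention(channel_attention(x))

x = np.random.default_rng(1).standard_normal((8, 16, 16))
y = dual_attention_fusion(x)
print(y.shape)  # (8, 16, 16): attention reweights features without changing shape
```

Because both gates only rescale activations, the module preserves the input's shape, which is what lets it sit inside a residual branch.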