Abstract
Face super-resolution is the task of generating high-resolution (HR) face images from low-resolution (LR) inputs. Recently, deep learning-based methods have shown remarkable progress in the super-resolution (SR) field. Most of these methods rely on auxiliary tasks such as face parsing, landmark detection, and attention to generate the HR images. However, parsing-map- and landmark-guided models require supplementary labeled data, which are difficult to obtain in practice. The attention mechanism requires no extra labeling and is also beneficial for face SR. Attention-based methods, however, focus on a few critical features and ignore the remaining ones, which can discard valuable information. Therefore, this paper proposes a novel deep HyFeat-based Attention in Attention model for face SR. The proposed model uses a coarse SR network and a deep convolutional neural network (CNN) to generate the HR image. The coarse SR network upsamples the LR image and produces a coarse super-resolved image, which is then passed to the deep CNN. The proposed work incorporates the Hybrid Feature Attention in Attention unit (HyFA²U), which consists of a Hybrid Feature block (HyFeat) and an Attention in Attention block (A²B), in the deep CNN to improve the visual quality of the output face images. The HyFeat block assists the model in extracting coarse features and learning enriched contextual information to enhance their details. The Attention in Attention block preserves both attentive and non-attentive beneficial features while suppressing unwanted ones: the attention branch focuses on specific facial features, while the non-attention branch learns the informative features that the attention branch ignores. The proposed model repeats the HyFA²U units to focus on different facial components and enhance the features, improving the quality of the resultant faces. Experimental results show that the proposed model achieves state-of-the-art performance on standard datasets, namely CelebAHQ, Helen, FFHQ, and LFW face datasets. The proposed method achieves an improvement of more than 0.35 dB in PSNR and 0.012 in SSIM on different datasets over the best models available in the literature.
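To make the two-branch structure concrete, the following is a minimal PyTorch sketch of how an attention-in-attention block of the kind described above could be organized: an attention branch that reweights salient features and a non-attention branch that retains the features the attention branch would otherwise suppress, followed by a stack of such units refining a coarse super-resolved face. The module names (AttentionInAttentionBlock, HyFA2U), channel widths, squeeze-and-excitation-style attention branch, and the simple convolutional stand-in for HyFeat are illustrative assumptions inferred from the abstract, not the authors' implementation.

```python
# Illustrative sketch only: layer sizes and module internals are assumptions
# inferred from the abstract, not the paper's released architecture.
import torch
import torch.nn as nn


class AttentionInAttentionBlock(nn.Module):
    """A²B-style block: an attention branch reweights salient features, while a
    non-attention branch keeps the features the attention branch would suppress."""

    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        # Attention branch: squeeze-and-excitation-style channel attention (assumed form).
        self.attention_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Non-attention branch: plain convolution that passes features through
        # without reweighting, so information ignored by the attention branch survives.
        self.non_attention_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Learnable fusion of the two branches.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attended = x * self.attention_branch(x)   # attentive features
        bypass = self.non_attention_branch(x)     # non-attentive features
        out = self.fuse(torch.cat([attended, bypass], dim=1))
        return out + x                            # residual connection


class HyFA2U(nn.Module):
    """Hybrid Feature Attention in Attention unit: a feature-extraction block
    (placeholder for HyFeat) followed by the A²B block."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.hyfeat = nn.Sequential(              # simplified stand-in for HyFeat
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.a2b = AttentionInAttentionBlock(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.a2b(self.hyfeat(x) + x)


if __name__ == "__main__":
    # A coarse super-resolved face (e.g. output of the coarse SR network) is mapped
    # to feature space, refined by repeated HyFA²U units, and projected back to RGB.
    coarse = torch.randn(1, 3, 128, 128)
    head = nn.Conv2d(3, 64, kernel_size=3, padding=1)
    body = nn.Sequential(*[HyFA2U(64) for _ in range(4)])
    tail = nn.Conv2d(64, 3, kernel_size=3, padding=1)
    sr = tail(body(head(coarse)))
    print(sr.shape)  # torch.Size([1, 3, 128, 128])
```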