Abstract

Generative adversarial networks (GANs) have recently achieved impressive results in facial age synthesis. However, these methods usually adopt an autoencoder-style generator, whose encoder-decoder bottleneck layer tends to produce blurry, low-quality output. To address this limitation, we propose a novel attention-aware conditional generative adversarial network (ACGAN). First, we utilize two different attention mechanisms to improve generation quality: on the one hand, we integrate channel attention modules into the generator to enhance its discriminative representation power; on the other hand, we introduce a position attention mask to handle images captured under varied backgrounds and illumination. Second, we deploy a local discriminator to enrich the central face region with informative details. Third, we adopt three losses to achieve accurate age generation while preserving personalized features: 1) the adversarial loss synthesizes photo-realistic faces with the expected aging effects; 2) the identity loss keeps identity information unchanged; 3) the attention loss improves the accuracy of attention-mask regression. To assess the effectiveness of the proposed method, we conduct extensive experiments on several public aging databases; results on MORPH, CACD, and FG-NET demonstrate the effectiveness of the proposed framework.
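The abstract does not specify the exact form of the channel attention modules; a common choice for this purpose is squeeze-and-excitation-style gating, where each channel of a feature map is reweighted by a learned scalar. The following is a minimal NumPy sketch of that idea under this assumption; the weight matrices `w1`, `b1`, `w2`, `b2` are hypothetical stand-ins for learned parameters, not the paper's actual architecture.

```python
import numpy as np

def channel_attention(feature_map, w1, b1, w2, b2):
    """SE-style channel attention sketch: squeeze -> excitation -> rescale.

    feature_map: (C, H, W) array.
    w1 (r, C), b1 (r,): bottleneck reduction weights (hypothetical).
    w2 (C, r), b2 (C,): expansion weights producing one gate per channel.
    """
    # Squeeze: global average pool each channel down to a scalar -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: small bottleneck MLP, ReLU then sigmoid gating in (0, 1)
    h = np.maximum(0.0, w1 @ z + b1)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))
    # Rescale: multiply each input channel by its learned attention weight
    return feature_map * s[:, None, None]

# Usage with random stand-in weights (C=8 channels, reduction ratio r=2)
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1, b1 = rng.standard_normal((2, 8)), np.zeros(2)
w2, b2 = rng.standard_normal((8, 2)), np.zeros(8)
y = channel_attention(x, w1, b1, w2, b2)
```

Because the gates lie in (0, 1), the module can only attenuate channels relative to the input, which is how it emphasizes discriminative channels over uninformative ones.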
