Abstract
Generative Adversarial Network (GAN)-based facial attribute editing has been successfully applied in many real-world applications. However, most existing methods suffer from semantic entanglement and imprecise editing when handling multiple facial attributes. The problem worsens when samples with minority attribute values are scarce, as majority attribute values then easily dominate learning. This study proposes a stacked conditional GAN (cGAN) to address these problems. Multiple-attribute editing is decomposed into several single-attribute editing tasks, each learned individually by a base cGAN. Moreover, samples with minority attribute values receive greater attention during learning. The proposed method not only reduces the difficulty of multiple-attribute editing but also mitigates the imbalance problem. Residual image learning is applied in our model to reduce the difficulty of image generation. The superiority of our model is demonstrated experimentally against popular GAN-based facial attribute editing methods in terms of image quality, editing accuracy, and training cost. The results confirm that the proposed model outperforms the other methods, especially under imbalanced conditions.
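The decomposition described above can be illustrated with a minimal sketch. All names here are hypothetical: in the actual method, each base editor would be a trained conditional GAN generator, not the placeholder function shown. The sketch only conveys the two structural ideas from the abstract: each base editor handles one attribute and predicts a residual added to its input, and multiple-attribute edits are produced by stacking single-attribute editors sequentially.

```python
# Hypothetical sketch of the stacked, residual-based editing scheme.
# An "image" is represented as a flat list of floats for simplicity;
# a real base editor would be a trained single-attribute cGAN generator.

def make_residual_editor(attr_strength):
    """Stand-in for one single-attribute generator: predicts a residual
    image and adds it to the input (residual image learning)."""
    def editor(image):
        residual = [attr_strength for _ in image]  # placeholder residual
        return [p + r for p, r in zip(image, residual)]
    return editor

def stacked_edit(image, editors):
    """Apply single-attribute editors one after another (the 'stacked'
    arrangement that replaces joint multiple-attribute editing)."""
    for editor in editors:
        image = editor(image)
    return image

# Edit two attributes by chaining two single-attribute editors.
image = [0.5, 0.5, 0.5]
editors = [make_residual_editor(0.1), make_residual_editor(-0.2)]
edited = stacked_edit(image, editors)
print(edited)  # each value shifted by the summed residuals, 0.1 - 0.2
```

Because each editor only learns a residual for one attribute, the generation task per stage stays small, which is the intuition the abstract gives for reduced training difficulty.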