Abstract

Facial attribute editing suffers from incorrect changes to face regions and from artifacts in the generated images. We propose a facial attribute editing method that combines a parallel GAN with attribute separation. First, the method integrates a U2-net encoder and a Trans-GAN decoder as the model encoder, so that facial spatial information is extracted and generated effectively. Second, RGB images and semantic mask images are used to train a parallel generator and discriminator, respectively. A semantic consistency loss is introduced to keep the semantic output of the two branches consistent, so that the parallel generator and discriminator converge in the same direction. The proposed model, trained on the CelebAMask-HQ dataset and validated on the CelebA dataset, separates the face mask image from the background mask image to improve the accuracy of facial attribute editing. Compared with existing facial attribute editing methods, the proposed method balances attribute editing ability against detail preservation: it accurately edits the target attribute region and substantially improves the quality of the generated facial images.
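The abstract does not spell out the semantic consistency loss, but one plausible formulation is a per-pixel divergence between the semantic predictions of the RGB branch and the mask branch. Below is a minimal sketch under that assumption, in PyTorch; the function and tensor names are illustrative, not taken from the paper.

```python
import torch.nn.functional as F

def semantic_consistency_loss(rgb_semantics, mask_semantics):
    """Penalize divergence between the semantic maps predicted by the
    RGB branch and the semantic-mask branch, encouraging the parallel
    generators to converge in the same direction.

    Both inputs are assumed to be (N, C, H, W) logits over C face regions.
    """
    # Compare the two branches' per-pixel class distributions via KL divergence.
    log_p_rgb = F.log_softmax(rgb_semantics, dim=1)
    p_mask = F.softmax(mask_semantics, dim=1)
    return F.kl_div(log_p_rgb, p_mask, reduction="batchmean")

# Hypothetical use inside a generator training step: the consistency term
# would be added to the usual adversarial and reconstruction losses, e.g.
#   loss_g = adv_loss + rec_loss + lambda_sc * semantic_consistency_loss(s_rgb, s_mask)
```

An L1 or cross-entropy penalty between the two semantic outputs would serve the same purpose; the key design choice is that the gradient flows into both branches, pulling them toward a shared semantic layout.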
