Abstract

Face swapping is a popular subject in face manipulation, which aims to replace the identity of the target face with that of the source face. Existing methods cannot preserve facial attributes (e.g., pose, expression, skin color, illumination, make-up, occlusion, etc.) of the target face well, causing noticeable temporal discontinuity and instability artifacts in video face swapping. In this paper, we propose a lightweight Generative Adversarial Network-based framework named AP-GAN, which can precisely control the attributes of the generated face to be consistent with those of the target face, achieving efficient and high-fidelity video face swapping. Specifically, we derive a U-Net-based generator with ID blocks to translate identity and PE blocks to correct pose and expression. Besides, a PE-aware discriminator is designed to help supervise the pose and expression of the synthetic face. Furthermore, we propose a discriminator-based perceptual loss leveraging multi-scale features of the discriminator to preserve facial attributes such as skin color, illumination, make-up and occlusion. AP-GAN is trained on Flickr-Faces-HQ, CelebA-HQ and VGGFace2 and evaluated on FaceForensics++. Extensive experiments and comparisons with existing state-of-the-art face swapping methods demonstrate the efficacy of our framework. Comprehensive ablation studies are also carried out to validate each proposed component and to contrast with other face manipulation approaches.
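The discriminator-based perceptual loss mentioned above can be illustrated as a feature-matching objective: features are extracted from several layers of the discriminator for both the generated and the target face, and their per-layer distances are accumulated. The sketch below is a minimal, hypothetical illustration of that idea (the function name, the use of an L1 distance, and the per-layer weights are assumptions, not details from the paper):

```python
import numpy as np

def discriminator_perceptual_loss(feats_fake, feats_real, weights=None):
    """Hypothetical sketch of a multi-scale feature-matching loss.

    feats_fake / feats_real: lists of discriminator feature maps
    (one array per discriminator layer) for the generated and the
    target face. An L1 distance is assumed here for illustration.
    """
    if weights is None:
        weights = [1.0] * len(feats_fake)  # assumed uniform layer weights
    loss = 0.0
    for w, f_fake, f_real in zip(weights, feats_fake, feats_real):
        # Mean absolute difference between feature maps at this scale
        loss += w * np.mean(np.abs(f_fake - f_real))
    return loss

# Toy usage: two "layers" of features; identical features give zero loss.
real = [np.ones((4, 4)), np.zeros((2, 2))]
fake = [np.ones((4, 4)), np.zeros((2, 2))]
print(discriminator_perceptual_loss(fake, real))  # → 0.0
```

Because the features come from the discriminator itself rather than a fixed pretrained network, this kind of loss adapts to the face domain as training progresses, which is one plausible reason it helps preserve attributes like illumination and make-up.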
