Abstract

Face rotation remains a challenging task due to variations in head pose, illumination, and occlusion. This work presents a novel Identity-and-Pose-Guided Generative Adversarial Network (IPG-GAN) that generates faces under arbitrary head poses. Beyond the proposed architecture, an adversarial head pose estimation loss is introduced so that IPG-GAN learns a precise head pose representation and, together with an adversarial identity classification loss, further improves face synthesis. A dual training strategy is adopted to learn the mutual transformation of identity and head pose representations between two source images within a single iteration. Under this supervision, a pose-robust identity representation is learned, and the learned identity and head pose representations are properly disentangled. Additionally, building on these two synthesis modes, IPG-GAN offers two strategies, frontalizing profile probe faces and rotating faces to profile views, for both recognition and verification, which differs from most previous face rotation methods. Quantitative and qualitative experiments on the Multi-PIE, LFW, and CFP databases for face synthesis, recognition, and verification show that the proposed method achieves state-of-the-art performance and yields substantial improvements.
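The dual training strategy described above can be illustrated with a minimal sketch: identity and head pose codes are extracted from two source images and swapped before decoding in a single iteration. This is an assumption-laden, PyTorch-style illustration, not the authors' implementation; all module names (SwapGenerator, id_enc, pose_enc, dec) and the image resolution are hypothetical, and the simple reconstruction term stands in for the paper's full adversarial identity classification and head pose estimation losses.

```python
# Hypothetical sketch of one "dual" training iteration (not the paper's code).
import torch
import torch.nn as nn

class SwapGenerator(nn.Module):
    """Disentangles identity and head-pose codes, then recombines them."""
    def __init__(self, id_dim=256, pose_dim=64):
        super().__init__()
        self.id_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, id_dim))
        self.pose_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, pose_dim))
        self.dec = nn.Sequential(nn.Linear(id_dim + pose_dim, 3 * 64 * 64), nn.Tanh())

    def forward(self, img_id, img_pose):
        # Identity code from one source image, head-pose code from the other.
        z_id = self.id_enc(img_id)
        z_pose = self.pose_enc(img_pose)
        out = self.dec(torch.cat([z_id, z_pose], dim=1))
        return out.view(-1, 3, 64, 64)

G = SwapGenerator()
x_a = torch.randn(4, 3, 64, 64)  # source image A (provides identity)
x_b = torch.randn(4, 3, 64, 64)  # source image B (provides head pose)

# Mutual transformation in one iteration: A's identity under B's pose and vice versa.
# In the paper, these outputs would feed the adversarial identity-classification
# and head-pose-estimation losses; only a self-reconstruction term is shown here.
fake_ab = G(x_a, x_b)
fake_ba = G(x_b, x_a)

recon = (nn.functional.l1_loss(G(x_a, x_a), x_a)
         + nn.functional.l1_loss(G(x_b, x_b), x_b))
recon.backward()
```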
