Abstract

In many scenarios, the frontal face image is the only criterion for judging a person's identity. However, it is difficult to collect a standard frontal image in an uncontrolled environment. To obtain a clear frontal image from a wide variety of profile images, many studies have addressed face frontalization. Some require three-dimensional face data or prior pose information, while others do not take pose information into account at all, and there are restrictions on the number of poses of the input face images. Because pose information is insufficiently considered, the authenticity of the generated frontal face images is low when multi-pose profile images are supplied as input. To resolve this problem, this paper proposes a Pose-weighted Generative Adversarial Network (PWGAN), which adds a pre-trained pose certification module to learn face pose information. For a single input image, PWGAN combines fusion features with pose features; for multiple input images, PWGAN uses pose information to dynamically distribute weights when fusing feature maps. PWGAN makes full use of pose information so that the generation network learns more about facial features and achieves better generation quality. Through comparative experiments, this paper shows that PWGAN frontalizes multi-pose faces more effectively than the methods above.
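The abstract does not specify PWGAN's exact weighting scheme, but the idea of pose-driven dynamic weighting when fusing multiple feature maps can be sketched as follows. This is a minimal illustration only; the function name, the use of a softmax over pose scores, and all shapes are assumptions, not the paper's implementation.

```python
import numpy as np

def pose_weighted_fusion(feature_maps, pose_scores):
    """Fuse per-image feature maps with weights derived from pose scores.

    feature_maps: array of shape (N, C, H, W), one map per input image.
    pose_scores:  array of shape (N,), e.g. a pose module's confidence
                  that each input view is close to frontal (hypothetical).
    """
    # Softmax over pose scores: views judged closer to frontal
    # receive larger fusion weights (assumed weighting rule).
    exp = np.exp(pose_scores - np.max(pose_scores))
    weights = exp / exp.sum()
    # Weighted sum over the image axis yields one fused feature map.
    return np.tensordot(weights, feature_maps, axes=(0, 0))

# Example: three profile views, the second closest to frontal.
maps = np.random.rand(3, 64, 8, 8)
scores = np.array([0.2, 0.9, 0.4])
fused = pose_weighted_fusion(maps, scores)
print(fused.shape)  # (64, 8, 8)
```

In this sketch the fused map is a convex combination of the inputs, so a near-frontal view dominates the fusion while extreme-pose views still contribute.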
