Abstract

Despite rapid progress in face frontalization, limited image pairs and low image quality remain obstacles. Although single-view face frontalization is free of the paired-image constraint, it limits the exploration of unseen data. We alleviate the face distortion problem in single-view face frontalization from two aspects. First, we design a dual-mode face transformation model trained on the loss of facial features at different angles; the two training modes address face frontalization under different degrees of distortion. Specifically, we apply a 3D head model to the data transformation by fitting the 2D image to the 3D model and rotating the model by designed angles to realize the face transformation. We judge the extent of distortion with Euler angles and derive the two training modes corresponding to the dual-mode face transformation. Second, we construct a perceptual loss module to preserve the detailed information of the frontal view; it combines content loss and identity loss to generate photorealistic frontal images. We conduct qualitative and quantitative experiments to illustrate the advantages of the proposed model on both controlled and in-the-wild datasets. Further, we apply full-reference and no-reference image quality assessment methods to evaluate the quality of images generated by different face frontalization models.
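The abstract does not give the exact formulations, but the two mechanisms it names can be sketched as follows. This is a minimal illustration under stated assumptions: the 45-degree Euler-angle threshold, the mode names, and the weighted sum of a feature-space content loss and a cosine identity loss are all assumptions, not the paper's actual definitions.

```python
import numpy as np

def select_training_mode(yaw_deg, pitch_deg, roll_deg, threshold_deg=45.0):
    """Pick one of the two training modes from the head pose.

    The paper judges the extent of distortion with Euler angles;
    the 45-degree threshold and mode names here are illustrative
    assumptions only.
    """
    max_angle = max(abs(yaw_deg), abs(pitch_deg), abs(roll_deg))
    return "large_pose" if max_angle > threshold_deg else "small_pose"

def content_loss(feat_gen, feat_ref):
    # Mean squared error between feature maps of the generated and
    # ground-truth frontal images (e.g. from a pretrained network).
    return float(np.mean((feat_gen - feat_ref) ** 2))

def identity_loss(emb_gen, emb_ref):
    # Cosine distance between identity embeddings of the two faces.
    cos = np.dot(emb_gen, emb_ref) / (
        np.linalg.norm(emb_gen) * np.linalg.norm(emb_ref))
    return float(1.0 - cos)

def perceptual_loss(feat_gen, feat_ref, emb_gen, emb_ref,
                    w_content=1.0, w_id=0.1):
    # Assumed combination: weighted sum of content and identity terms.
    return (w_content * content_loss(feat_gen, feat_ref)
            + w_id * identity_loss(emb_gen, emb_ref))
```

With identical generated and reference features, both terms vanish, so the total loss is zero; the weights `w_content` and `w_id` trade off pixel-level fidelity against identity preservation.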
