Abstract

Face alignment and reconstruction are classical problems in computer vision, and one of their greatest difficulties is the limited number of facial images annotated with landmark points. The 300W-LP dataset is the most commonly used by existing methods for single-view 3D Morphable Model (3DMM)-based reconstruction; however, model performance is limited by the small variety of facial images it contains. In this work, a 3D facial image dataset with landmark points generated by the rotate-and-render method is proposed. The key innovation of the proposed method is that rotating faces back and forth in 3D space and then re-rendering them to the 2D plane provides strong self-supervision. Recent advances in 3D face modeling and high-resolution generative adversarial networks (GANs) are leveraged as the building blocks of the pipeline. To obtain more precise facial landmark points, the 3D Dense Face Alignment (3DDFA) model is used to label the generated images and filter the landmark points. Finally, the 3DDFA model is retrained on the proposed dataset, and an improved result is achieved.
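The "back-and-forth" rotation described above can be illustrated with a minimal sketch: a 3D point is rotated to a new pose and then rotated back, and its 2D projection is recovered exactly, which is what makes the correspondence between the two rendered views usable as a self-supervision signal. The point coordinates and function names below are hypothetical, not taken from the paper's implementation.

```python
import math

def rotate_y(p, theta):
    """Rotate a 3D point about the y (yaw) axis by theta radians."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * z, y, -s * x + c * z)

def project(p):
    """Orthographic projection onto the 2D image plane (drop depth)."""
    return (p[0], p[1])

# Toy 3D facial landmark (hypothetical coordinates).
landmark = (0.3, 0.2, 0.05)

# Rotate the face to a new pose, then rotate back ("back-and-forth").
theta = math.radians(30)
posed = rotate_y(landmark, theta)
restored = rotate_y(posed, -theta)

# The round trip recovers the original landmark, so the two rendered
# views share known 2D correspondences -- the self-supervision signal.
assert all(abs(a - b) < 1e-9
           for a, b in zip(project(restored), project(landmark)))
```

In the actual method the rotation is applied to a full 3D face mesh and the re-rendering is done by a GAN, but the geometric invariance sketched here is the source of the free supervision.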
