Abstract

The 3-D Morphable Model (3DMM) has widely benefited 3-D face-related tasks thanks to its parametric representation of facial geometry and appearance. However, previous 3-D face reconstruction methods have limited power to represent facial expressions, owing to imbalanced training data and insufficient ground-truth 3-D shapes. In this article, we propose a novel framework for learning personalized shapes so that the reconstructed model fits the corresponding face images well. Specifically, we augment the dataset following several principles to balance the distribution of facial shapes and expressions. A mesh editing method serves as an expression synthesizer to generate face images with more varied expressions. In addition, we improve pose estimation accuracy by converting the projection parameters into Euler angles. Finally, we propose a weighted sampling method to improve the robustness of training: the offset between the base face model and the ground-truth face model defines the sampling probability of each vertex. Experiments on several challenging benchmarks demonstrate that our method achieves state-of-the-art performance.
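The weighted sampling idea described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: it assumes the base and ground-truth face models are vertex arrays in correspondence, and uses the per-vertex L2 offset between them (normalized to sum to 1) as the sampling probability, so vertices that deviate most from the base model are sampled most often. All function and variable names here are our own.

```python
import numpy as np

def vertex_sampling_probs(base_verts, gt_verts):
    """Sampling probability per vertex, proportional to the offset
    between the base face model and the ground-truth face model.
    base_verts, gt_verts: (V, 3) arrays of corresponding vertices."""
    offsets = np.linalg.norm(gt_verts - base_verts, axis=1)  # (V,)
    return offsets / offsets.sum()

def sample_vertices(base_verts, gt_verts, n_samples, seed=None):
    """Draw vertex indices for training, favoring large-offset vertices."""
    rng = np.random.default_rng(seed)
    p = vertex_sampling_probs(base_verts, gt_verts)
    return rng.choice(len(p), size=n_samples, replace=True, p=p)

# Toy example: 4 vertices, the last one deviates strongly from the base.
base = np.zeros((4, 3))
gt = np.array([[0.01, 0.0, 0.0],
               [0.0, 0.01, 0.0],
               [0.0, 0.0, 0.01],
               [1.0, 1.0, 1.0]])
idx = sample_vertices(base, gt, n_samples=1000, seed=0)
```

In the toy example, the last vertex carries nearly all of the probability mass (offset ≈ 1.73 vs. 0.01 for the others), so almost every sampled index is 3.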
