Abstract

Deep learning-based 3D face reconstruction methods have shown promising results, but they ignore the contextual information of the face, which forms a topologically unified whole. This paper proposes a 3D face reconstruction approach based on hybrid-level contextual information. First, we propose PPR-CNet, a regression network with feature-level contextual modeling capability that adopts preferential parameter regression to dynamically regress the 3DMM parameters according to their differing impacts on the reconstructed 3D face. Second, we design a contextual landmark loss to constrain the facial geometry at the landmark level. We further introduce a differentiable renderer combined with multiple loss functions for weakly-supervised training. Quantitative experiments on two benchmarks show that our method outperforms several state-of-the-art methods, and extensive qualitative experiments indicate that it performs well in terms of realism, facial proportions, and robustness to occlusion.
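
As a hedged illustration of the weakly-supervised objective described above, the sketch below combines a photometric term from a differentiable renderer with a weighted landmark term and a regularizer on the 3DMM coefficients. The function name, tensor shapes, per-landmark weights, and loss weights are illustrative assumptions, not the authors' PPR-CNet or their contextual landmark loss.

```python
# Minimal sketch (assumptions, not the authors' implementation): a combined
# weakly-supervised objective mixing a photometric loss on the rendered face
# with a weighted landmark loss and a 3DMM coefficient regularizer.
import torch

def weakly_supervised_loss(rendered, target, face_mask,
                           pred_lmk, gt_lmk, lmk_weights,
                           coeffs, w_photo=1.0, w_lmk=0.1, w_reg=1e-4):
    # Photometric term: L1 difference restricted to the rendered face region.
    photo = (face_mask * (rendered - target).abs()).sum() / face_mask.sum().clamp(min=1.0)

    # Landmark term: per-landmark weights let salient points (e.g. contour,
    # nose tip, mouth corners) contribute more; weights broadcast over (x, y).
    lmk = (lmk_weights.unsqueeze(-1) * (pred_lmk - gt_lmk) ** 2).mean()

    # Regularizer: keep 3DMM coefficients close to the statistical prior.
    reg = (coeffs ** 2).sum()

    return w_photo * photo + w_lmk * lmk + w_reg * reg

# Example with illustrative shapes: a 224x224 RGB render, 68 2D landmarks,
# and a flat vector of hypothetical 3DMM coefficients.
rendered = torch.rand(3, 224, 224, requires_grad=True)
target = torch.rand(3, 224, 224)
face_mask = torch.ones(1, 224, 224)
pred_lmk = torch.rand(68, 2, requires_grad=True)
gt_lmk = torch.rand(68, 2)
lmk_weights = torch.ones(68)
coeffs = torch.rand(100, requires_grad=True)

loss = weakly_supervised_loss(rendered, target, face_mask,
                              pred_lmk, gt_lmk, lmk_weights, coeffs)
loss.backward()
```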
