Abstract

This paper addresses 3D face reconstruction and semantic annotation from a single-view noisy depth image. A deep neural network-based coarse-to-fine framework is presented that combines 3D morphable model (3DMM) regression with per-vertex geometry refinement. The low-dimensional 3DMM subspace coefficients initialize the global facial geometry but tend to be over-smooth because of the low-pass characteristics of the shape subspace. The proposed geometry refinement subnetwork therefore predicts per-vertex displacements to enrich local details and is learned from unlabelled noisy depth images using a registration-like loss. To guarantee semantic correspondence between the resulting 3D face and the depth image, a semantic consistency constraint is introduced to adapt an annotation model learned from synthetic data to real noisy depth images: the predicted depth annotations are required to be consistent with the labels propagated from the coarse and refined parametric 3D faces. The proposed coarse-to-fine reconstruction scheme and the semantic consistency constraint are evaluated on depth-based 3D face reconstruction and semantic annotation. A series of experiments demonstrates that the proposed approach outperforms the compared methods on both 3D face reconstruction and depth image annotation.
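The coarse-to-fine idea summarized above can be illustrated with a minimal sketch, not the authors' implementation: a coarse branch regresses low-dimensional 3DMM coefficients from the depth image, a refinement branch predicts per-vertex displacements on top of the coarse shape, and a one-sided Chamfer distance stands in for the registration-like loss. The layer sizes, the placeholder 3DMM statistics (mean_shape, basis), and the specific loss are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code) of a coarse-to-fine
# depth-to-face network: 3DMM coefficient regression + per-vertex refinement.
import torch
import torch.nn as nn


class CoarseToFineFace(nn.Module):
    def __init__(self, num_vertices=1000, num_coeffs=80):
        super().__init__()
        # Placeholder 3DMM statistics (mean shape and linear shape basis);
        # a real system would load these from a fitted morphable model.
        self.register_buffer("mean_shape", torch.zeros(num_vertices, 3))
        self.register_buffer("basis", torch.randn(num_coeffs, num_vertices * 3) * 0.01)
        # Coarse branch: regress low-dimensional 3DMM coefficients from a depth image.
        self.coarse = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_coeffs),
        )
        # Refinement branch: predict per-vertex displacements to add local detail
        # beyond the low-pass 3DMM subspace.
        self.refine = nn.Sequential(
            nn.Linear(num_vertices * 3, 512), nn.ReLU(),
            nn.Linear(512, num_vertices * 3),
        )

    def forward(self, depth):
        coeffs = self.coarse(depth)                                   # (B, num_coeffs)
        coarse_shape = self.mean_shape + (coeffs @ self.basis).view(
            -1, self.mean_shape.shape[0], 3)                          # (B, V, 3)
        disp = self.refine(coarse_shape.flatten(1)).view_as(coarse_shape)
        return coarse_shape, coarse_shape + disp


def registration_like_loss(pred_vertices, depth_points):
    """One-sided Chamfer distance from predicted vertices to the depth point cloud,
    used here as a stand-in for the paper's registration-like loss on unlabelled data."""
    d = torch.cdist(pred_vertices, depth_points)                      # (B, V, N)
    return d.min(dim=2).values.mean()
```

In this sketch the refinement branch is trained without ground-truth meshes: the loss only asks the refined vertices to lie close to the observed depth points, which mirrors the unlabelled, registration-like supervision described in the abstract.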
