Abstract

To address the inability of 3DMM parameter-fitting methods to generate realistic 3D faces, a single-image realistic 3D face reconstruction method based on deep learning is proposed. First, the RP-Net regression network is constructed, along with a dataset of 50,000 face images; the network learns the model parameters from the input image and fits the face model to generate the 3D face geometry. Second, weakly supervised learning is performed with a multi-level loss function comprising a low-level pixel loss, a landmark loss, and a high-level identity loss. Third, a realistic face texture is generated by texture mapping. Finally, the proposed method is compared with recent 3D face reconstruction methods on two real face datasets and one generated dataset. It is tested under factors that affect reconstruction, such as lighting, expression, and head pose, and the reconstructions are evaluated quantitatively with SSIM and PSNR. The results show that the proposed method generates accurate face shapes and realistic face textures. Compared with recent 3D face reconstruction methods, the training time and number of iterations of the proposed method are reduced by 6% and 13%, respectively, the SSIM value is increased by 0.005–0.010, and the PSNR value is increased by 0.03–0.08 dB on average.
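The abstract names three loss terms combined for weakly supervised training. Below is a minimal NumPy sketch of how such a weighted multi-level loss could be assembled; the function names, weights, and the specific norms (L1 for pixels, L2 for landmarks, cosine distance for identity) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pixel_loss(rendered, photo, mask=None):
    """Low-level loss: mean per-pixel difference between the rendered face
    and the input photograph (L1 here; the paper may use a different norm)."""
    diff = np.abs(rendered - photo)
    if mask is not None:
        diff = diff[mask]  # restrict to the face region if a mask is given
    return diff.mean()

def landmark_loss(pred_lmks, gt_lmks):
    """Mid-level loss: mean squared distance between projected model
    landmarks and detected 2D landmarks (arrays of shape (N, 2))."""
    return np.mean(np.sum((pred_lmks - gt_lmks) ** 2, axis=1))

def identity_loss(emb_rendered, emb_photo):
    """High-level loss: 1 - cosine similarity between face-recognition
    embeddings of the rendered image and the real image."""
    cos = np.dot(emb_rendered, emb_photo) / (
        np.linalg.norm(emb_rendered) * np.linalg.norm(emb_photo))
    return 1.0 - cos

def multi_level_loss(rendered, photo, pred_lmks, gt_lmks,
                     emb_r, emb_p, w_pix=1.0, w_lmk=1.0, w_id=1.0):
    # The weights w_* are placeholders; the paper's values are not given here.
    return (w_pix * pixel_loss(rendered, photo)
            + w_lmk * landmark_loss(pred_lmks, gt_lmks)
            + w_id * identity_loss(emb_r, emb_p))
```

When the rendered face, its landmarks, and its identity embedding all match the input photograph, every term vanishes and the total loss is zero, which is the behavior a weakly supervised pipeline relies on.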
