Abstract

Face recognition has experienced a flurry of advances with deep learning. However, training a model requires large amounts of data. To meet this requirement, some researchers use 3D rendering techniques to synthesize face images and expand the training data. Experimental results have demonstrated that this method is effective. However, a dataset bias exists between real 2D face images and 3D-synthesized face images. In this paper, we use a Deep Transfer Network (DTN) to reduce this bias. First, we utilize the 3DMM face model to synthesize face images with various poses and neutral expressions. We choose Inception-ResNet-V1 as our benchmark model. Then, we optimize our DTN based on the maximum mean discrepancy (MMD) of the shared feature-extraction layers and the discrimination layers. Our experiments demonstrate that a model jointly trained on synthesized and real images is more robust than one trained on either dataset alone (2D real faces or 3D-synthesized faces). Furthermore, the performance of our approach is comparable to state-of-the-art results from systems trained on millions of real images.
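The abstract's MMD-based objective can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows how a (biased) empirical estimate of squared MMD with a Gaussian kernel compares feature batches from two domains (e.g. real vs. synthesized faces). The array shapes, `sigma`, and the random stand-in features are all assumptions for illustration.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian kernel between rows of a and b.
    sq = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased empirical estimate of squared maximum mean discrepancy:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
real_feats = rng.normal(0.0, 1.0, (64, 128))   # stand-in for real-image features
synth_feats = rng.normal(0.5, 1.0, (64, 128))  # stand-in for synthesized-image features
print(mmd2(real_feats, synth_feats))
```

In a transfer-learning setup like the one the abstract describes, such an MMD term would be added to the classification loss so that the shared layers learn features whose distributions match across the two domains.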
