Abstract

Multi-pose face frontalization can alleviate the influence of pose variation on face analysis. Traditional methods that synthesize a frontal face image directly from a multi-pose face image tend to lose facial details. To overcome this problem, we propose a face frontalization method based on an encoder-decoder network, namely the multitask convolutional encoder-decoder network (MCEDN). MCEDN introduces a frontal raw feature network to synthesize global raw features of the frontal face; the decoder then synthesizes a clearer frontal face image by fusing these global raw features with the local features extracted by the encoder. A multitask learning mechanism builds an end-to-end model that integrates three modules: local feature extraction, global raw feature synthesis, and frontal image synthesis. Sharing parameters across these modules further improves model performance. Compared with existing methods, MCEDN synthesizes frontal face images with a stable structure and rich details on multiple datasets. We further apply the synthesized frontal images to face recognition and facial expression recognition, and the state-of-the-art results demonstrate that MCEDN preserves a substantial amount of facial detail.
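The three-module data flow described above (encoder for local features, a frontal raw feature branch for global features, and a decoder that fuses both) can be sketched roughly as follows. This is a minimal illustrative sketch only: all layer counts, channel sizes, and module names (`MCEDNSketch`, `frontal_raw`, etc.) are assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of the MCEDN data flow from the abstract.
# Assumed architecture details; the paper's real network differs.
import torch
import torch.nn as nn

class MCEDNSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: extracts local features from the multi-pose input
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Frontal raw feature network: synthesizes global raw
        # features of the frontal face from the encoded representation
        self.frontal_raw = nn.Sequential(
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
        )
        # Decoder: fuses local and global features and synthesizes
        # the frontal face image
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        local_feats = self.encoder(x)                 # local features
        global_feats = self.frontal_raw(local_feats)  # global raw features
        # Fusion by channel concatenation (one plausible choice)
        fused = torch.cat([local_feats, global_feats], dim=1)
        return self.decoder(fused)                    # frontal face image

profile = torch.randn(1, 3, 128, 128)  # dummy multi-pose face batch
frontal = MCEDNSketch()(profile)
print(frontal.shape)  # torch.Size([1, 3, 128, 128])
```

Because all three modules sit in one `nn.Module` and share the encoder's features, a single backward pass trains them jointly, which is the end-to-end, parameter-sharing multitask setup the abstract describes.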
