Abstract

We study the problem of facial expression animation from a still image according to a driving video. This is a challenging task because expression motions are non-rigid and subtle, making them difficult to capture. Existing methods mostly fail to model these subtle expression motions, leading to a lack of detail in their animation results. In this paper, we propose a novel facial expression animation method based on generative adversarial learning. To capture subtle expression motions, a Landmark-guided Residual Module (LRM) is proposed to model detailed facial expression features. Specifically, residual learning is conducted at both coarse and fine levels, conditioned on facial landmark heatmaps and landmark points, respectively. Furthermore, we employ a consistency discriminator to ensure the temporal consistency of the generated video sequence. In addition, a novel metric named the Emotion Consistency Metric is proposed to evaluate how well the facial expressions in the generated sequences match those in the driving videos. Experiments on the MUG-Face, Oulu-CASIA, and CAER datasets show that the proposed method can effectively generate arbitrary expression motions on the source still image, producing results that are more photo-realistic and more consistent with the driving video than those of state-of-the-art methods.
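To make the landmark-conditioned residual learning idea concrete, below is a minimal PyTorch-style sketch of one residual block conditioned on a landmark heatmap. This is an illustrative reading of the abstract, not the paper's actual architecture: the class name, channel sizes, and layer choices are all assumptions, and the full LRM additionally conditions on landmark points at the fine level, which is omitted here.

```python
import torch
import torch.nn as nn


class LandmarkGuidedResidualBlock(nn.Module):
    """Hypothetical sketch of a residual block conditioned on landmark heatmaps.

    The heatmap is concatenated with the input feature map, and two
    convolutions predict a residual that is added back to the input.
    The block therefore only needs to model the (subtle) change induced
    by the driven expression, rather than the whole face appearance.
    """

    def __init__(self, feat_channels: int, heatmap_channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(feat_channels + heatmap_channels, feat_channels,
                      kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, kernel_size=3, padding=1),
        )

    def forward(self, feat: torch.Tensor, heatmap: torch.Tensor) -> torch.Tensor:
        # Condition on the landmark heatmap, then learn only the delta.
        residual = self.body(torch.cat([feat, heatmap], dim=1))
        return feat + residual  # residual learning: output = input + delta


# Example usage with assumed shapes: 64-channel features at 32x32 resolution,
# one heatmap channel per landmark (68 landmarks is a common convention).
block = LandmarkGuidedResidualBlock(feat_channels=64, heatmap_channels=68)
out = block(torch.randn(1, 64, 32, 32), torch.randn(1, 68, 32, 32))
```

The residual formulation reflects the intuition stated in the abstract: since the output adds a learned delta to the input features, the network's capacity is spent on the fine expression changes rather than on reconstructing the identity, which is already present in the source image.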
