Abstract

To generate a talking face from a speech audio clip and a face image, it is essential to match variations in facial appearance to the speech audio through subtle movements of different face regions. However, the facial movements generated by existing methods lack detail and vividness, or the methods are restricted to a specific person. In this article, we propose a novel two-stage network that generates talking faces for any target identity using annotations of action units (AUs). In the first stage, an audio-to-AU network learns the relationship between the audio and the AUs, producing an AU group consistent with the input audio. In the second stage, the AU group from the first stage and a face image are fed into a generation network to produce the resulting talking-face image. Experimental results confirm that, compared to state-of-the-art methods, our approach produces more realistic and vivid talking faces for arbitrary targets, with richer details of facial movements such as cheek and eyebrow motion.
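
Below is a minimal PyTorch sketch of the two-stage pipeline described above: an audio-to-AU module followed by an AU-conditioned face generator. The module names, AU count, feature dimensions, and layer choices are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of the two-stage talking-face pipeline (not the paper's implementation).
import torch
import torch.nn as nn


class AudioToAU(nn.Module):
    """Stage 1: map an audio feature window to action-unit (AU) activations."""

    def __init__(self, audio_dim: int = 128, num_aus: int = 17):  # sizes are assumptions
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(audio_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_aus),
            nn.Sigmoid(),  # AU activations in [0, 1]
        )

    def forward(self, audio_feat: torch.Tensor) -> torch.Tensor:
        return self.net(audio_feat)


class TalkingFaceGenerator(nn.Module):
    """Stage 2: fuse the predicted AU group with an identity face image."""

    def __init__(self, num_aus: int = 17):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Broadcast the AU vector spatially and concatenate it with image features.
        self.au_proj = nn.Linear(num_aus, 64)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, face: torch.Tensor, aus: torch.Tensor) -> torch.Tensor:
        feat = self.image_encoder(face)                       # (B, 64, H/4, W/4)
        au_map = self.au_proj(aus)[:, :, None, None]          # (B, 64, 1, 1)
        au_map = au_map.expand(-1, -1, feat.size(2), feat.size(3))
        fused = torch.cat([feat, au_map], dim=1)              # (B, 128, H/4, W/4)
        return self.decoder(fused)                            # talking-face frame


if __name__ == "__main__":
    audio_feat = torch.randn(1, 128)           # one window of audio features
    face = torch.randn(1, 3, 128, 128)         # identity face image
    aus = AudioToAU()(audio_feat)              # stage 1: audio -> AU group
    frame = TalkingFaceGenerator()(face, aus)  # stage 2: AU group + face -> frame
    print(frame.shape)                         # torch.Size([1, 3, 128, 128])
```

In this sketch the AU vector acts as the sole conditioning signal between the two stages, which is the key idea the abstract highlights: the generator never sees the raw audio, only the predicted AU group.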
