Abstract

Talking face generation synthesizes a lip-synchronized talking face video from an arbitrary face image and corresponding audio clips. Current talking face models can be divided into four parts: visual feature extraction, audio feature processing, multimodal feature fusion, and a rendering module. For visual feature extraction, existing methods face a complex learning task with noisy features; this article therefore introduces an attention-based disentanglement module that uses speech-related facial action unit (AU) information to disentangle the face into an Audio-face and an Identity-face. For multimodal feature fusion, existing methods ignore both the interaction between cross-modal features and the local driving information of the mouth muscles. This study proposes a novel generative framework that incorporates a dilated non-causal temporal convolutional self-attention network as a multimodal fusion module to enhance the learning of cross-modal features. The proposed method employs both audio and speech-related facial AUs as driving information, since speech-related AU information can facilitate more accurate mouth movements. Given the high correlation between speech and speech-related AUs, we propose an audio-to-AU module to predict speech-related AU information. Finally, we present a diffusion model for synthesizing talking face images. We verify the effectiveness of the proposed model on the GRID and TCD-TIMIT datasets and conduct an ablation study to verify the contribution of each component. Quantitative and qualitative experiments demonstrate that our method outperforms existing methods in both image quality and lip-sync accuracy. Code is available at https://mftfg-au.github.io/Multimodal_Fusion/.
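To make the fusion design concrete, the following is a minimal PyTorch sketch of a dilated non-causal temporal convolutional self-attention block of the kind the abstract describes. It is not the authors' implementation; the class name, feature dimension, head count, and dilation schedule are illustrative assumptions, as is the simple additive mixing of the audio and AU streams.

```python
# Minimal sketch (illustrative, not the paper's code): fuse audio and
# speech-related AU feature sequences with dilated non-causal temporal
# convolutions followed by multi-head self-attention.
import torch
import torch.nn as nn

class DilatedTemporalSelfAttentionFusion(nn.Module):
    def __init__(self, dim=256, heads=4, dilations=(1, 2, 4)):
        super().__init__()
        # Non-causal: symmetric padding (padding = dilation for kernel 3)
        # lets every frame see both past and future context.
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel_size=3, dilation=d, padding=d)
            for d in dilations
        )
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, audio_feat, au_feat):
        # audio_feat, au_feat: (batch, time, dim) outputs of the two encoders.
        x = audio_feat + au_feat              # assumed additive cross-modal mixing
        h = x.transpose(1, 2)                 # (batch, dim, time) for Conv1d
        for conv in self.convs:
            h = torch.relu(conv(h)) + h       # residual stack of dilated temporal convs
        h = h.transpose(1, 2)                 # back to (batch, time, dim)
        out, _ = self.attn(h, h, h)           # self-attention over the fused sequence
        return self.norm(out + h)             # residual connection + layer norm

# Usage: fused = DilatedTemporalSelfAttentionFusion()(audio_seq, au_seq),
# where both inputs are (batch, time, 256) tensors.
```

The stacked dilations widen the temporal receptive field cheaply, while the self-attention layer models longer-range cross-frame dependencies; this combination mirrors the role the abstract assigns to the multimodal fusion module.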
