Abstract

In recent years, applications of digital humans have become increasingly widespread. One of the most challenging core technologies is the automatic generation of highly realistic 3D facial animation that combines facial movements and speech. Single-modal, speech-driven 3D facial animation typically ignores the weak correlation between speech and upper-face movements as well as head pose. In contrast, video-driven approaches handle pose well and obtain natural expressions. However, mapping 2D facial information to 3D may cause information loss, so the lip synchronization produced by video-driven methods is not as good as that of speech-driven methods trained on 4D facial data. Therefore, this paper proposes a dual-modal generation method that uses both speech and video information to generate more natural and vivid 3D facial animation. Specifically, the lip movements related to speech are generated from the combined speech-video information, while speech-uncorrelated poses and expressions are generated solely from video information. The speech-driven module extracts speech features, and its output lip animation serves as the foundation of the facial animation. The expression-and-pose module extracts temporal visual features to regress expression and head-pose parameters. We fuse speech and video features to obtain jaw-pose parameters related to lip movements and use these parameters to fine-tune the lip animation generated by the speech-driven module. The paper introduces multiple consistency losses to enhance the network's capability to generate expressions and poses. Experiments on the LRS3, TCD-TIMIT and MEAD datasets show that the proposed method outperforms current state-of-the-art methods on evaluation metrics such as CER, WER, VER and VWER. In addition, a perceptual user study shows that in over 77% and 70% of cases, respectively, participants judged this paper's method more realistic than the comparison algorithms EMOCA and SPECTRE; for lip synchronization, it was preferred in over 79% and 66% of cases, respectively. Both evaluations demonstrate the effectiveness of the proposed method.
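To make the modality split described above more concrete, the following is a minimal PyTorch sketch of one plausible arrangement: a speech branch producing the base lip animation, a video branch regressing expression and head pose, and a fused branch predicting jaw pose to refine the lips. All module names, feature dimensions, and the FLAME-style output sizes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DualModalFaceAnimator(nn.Module):
    """Hypothetical sketch of the dual-modal pipeline: speech drives the base
    lip animation, video drives expression and head pose, and fused
    speech+video features predict jaw pose to fine-tune the lips."""

    def __init__(self, audio_dim=768, video_dim=512, hidden=256,
                 n_expr=50, n_jaw=3, n_head_pose=3, n_verts=5023):
        super().__init__()
        # Speech-driven module: temporal audio features -> base lip animation
        # (vertex offsets here; a FLAME-style mesh size is assumed).
        self.speech_encoder = nn.GRU(audio_dim, hidden, batch_first=True)
        self.lip_decoder = nn.Linear(hidden, n_verts * 3)

        # Expression-and-pose module: temporal visual features -> expression
        # and head-pose parameters (speech-uncorrelated).
        self.video_encoder = nn.GRU(video_dim, hidden, batch_first=True)
        self.expr_head = nn.Linear(hidden, n_expr)
        self.head_pose_head = nn.Linear(hidden, n_head_pose)

        # Fusion branch: concatenated speech + video features -> jaw pose
        # used to refine the speech-driven lip animation.
        self.jaw_head = nn.Sequential(
            nn.Linear(hidden * 2, hidden), nn.ReLU(), nn.Linear(hidden, n_jaw))

    def forward(self, audio_feats, video_feats):
        a, _ = self.speech_encoder(audio_feats)   # (B, T, hidden)
        v, _ = self.video_encoder(video_feats)    # (B, T, hidden)

        base_lips = self.lip_decoder(a)           # speech-only lip animation
        expr = self.expr_head(v)                  # video-only expression
        head_pose = self.head_pose_head(v)        # video-only head pose
        jaw_pose = self.jaw_head(torch.cat([a, v], dim=-1))  # fused refinement
        return base_lips, expr, head_pose, jaw_pose

# Example usage with dummy batched sequences (batch=2, 100 frames).
model = DualModalFaceAnimator()
audio = torch.randn(2, 100, 768)
video = torch.randn(2, 100, 512)
base_lips, expr, head_pose, jaw_pose = model(audio, video)
```

In such a setup, the consistency losses mentioned in the abstract would be applied to the predicted expression, pose, and jaw parameters during training; the exact loss terms are described in the full paper.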
