3D facial animations, crucial to augmented and mixed reality digital media, have evolved from mere aesthetic elements to potent storytelling tools. Despite considerable progress in animating neutral expressions, existing methods still struggle to capture the authenticity of emotions. This paper introduces a novel approach that captures fine-grained facial expressions and generates facial animations synchronized with audio. Our method consists of two key components. First, a Local-to-global Latent Diffusion Model (LG-LDM) tailored for authentic facial expressions, which integrates audio, the diffusion time step, facial expressions, and other conditions to encode emotionally rich latent features even from noisy raw audio signals. The core of LG-LDM is our carefully designed Facial Denoiser Model (FDM), which aligns local-to-global animation features with the audio. Second, we redesign an Emotion-centric Vector Quantized-Variational AutoEncoder (EVQ-VAE) framework that decodes the subtle differences among emotions and reconstructs the final 3D facial geometry. Our work addresses the key challenge of emotionally realistic, audio-synchronized 3D facial animation and enhances the immersive experience and emotional depth of augmented and mixed reality applications. We provide a reproducibility kit including our code, dataset, and detailed instructions for running the experiments. This kit is available at https://github.com/wangxuanx/Face-Diffusion-Model.
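
To make the two-stage pipeline concrete, the sketch below shows one plausible way the pieces could fit together: a denoiser that predicts noise on latent expression sequences conditioned on audio, time step, and emotion (standing in for the FDM inside LG-LDM), followed by a codebook-based decoder that maps denoised latents to mesh vertices (standing in for EVQ-VAE). All module names, dimensions, the condition-fusion scheme, and the simplistic reverse-diffusion loop are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Minimal, hypothetical sketch of an audio-conditioned latent-diffusion +
# VQ decoding pipeline. Everything here is an assumption for illustration.
import torch
import torch.nn as nn


class FacialDenoiser(nn.Module):
    """Placeholder for the paper's FDM: predicts the noise added to a latent
    expression sequence, conditioned on audio features, time step, emotion."""

    def __init__(self, latent_dim=128, audio_dim=768, num_emotions=8):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, latent_dim)
        self.emotion_emb = nn.Embedding(num_emotions, latent_dim)
        self.time_emb = nn.Embedding(1000, latent_dim)
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=latent_dim, nhead=4,
                                       batch_first=True),
            num_layers=4)
        self.out = nn.Linear(latent_dim, latent_dim)

    def forward(self, noisy_latents, audio_feats, t, emotion_id):
        # Fuse conditions by simple addition; the paper's local-to-global
        # alignment is more elaborate than this placeholder.
        cond = (self.audio_proj(audio_feats)
                + self.time_emb(t)[:, None, :]
                + self.emotion_emb(emotion_id)[:, None, :])
        return self.out(self.backbone(noisy_latents + cond))


class EmotionVQDecoder(nn.Module):
    """Placeholder for EVQ-VAE decoding: quantizes denoised latents against a
    learned codebook and maps them to per-frame vertex positions."""

    def __init__(self, latent_dim=128, codebook_size=256, num_vertices=5023):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_vertices * 3))

    def forward(self, latents):
        # Nearest-neighbour quantization against the codebook.
        dists = torch.cdist(latents, self.codebook.weight.unsqueeze(0))
        quantized = self.codebook(dists.argmin(dim=-1))
        b, f, _ = quantized.shape
        return self.decoder(quantized).view(b, f, -1, 3)


if __name__ == "__main__":
    frames, latent_dim = 30, 128
    denoiser = FacialDenoiser(latent_dim)
    decoder = EmotionVQDecoder(latent_dim)
    audio = torch.randn(1, frames, 768)           # e.g. wav2vec-style features
    latents = torch.randn(1, frames, latent_dim)  # start from Gaussian noise
    for t in reversed(range(0, 1000, 100)):       # crude reverse-diffusion loop
        eps = denoiser(latents, audio, torch.tensor([t]), torch.tensor([3]))
        latents = latents - 0.1 * eps             # placeholder update rule
    vertices = decoder(latents)                   # (1, frames, 5023, 3) animation
    print(vertices.shape)
```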