Facial expression analysis is a critical component in numerous applications, ranging from human-computer interaction to digital character animation. Despite the availability of extensive datasets, most focus predominantly on basic emotions, limiting the expressiveness and applicability of models trained on them. This article introduces a novel approach to generating compound emotional expressions by leveraging the Emotion Wheel, a model that captures the complex interrelations between basic emotions. Our method integrates the EMOCA (Emotion Driven Monocular Face Capture and Animation) [1] framework, which enhances 3D facial reconstruction by incorporating emotion recognition, and combines stacked models [2, 54] with a sophisticated expression blending algorithm to synthesize nuanced 2D and 3D facial animations. Utilizing the VKIST dataset, which includes high-resolution facial images of Vietnamese individuals, we build a comprehensive database of emotion parameters. Through principal component analysis (PCA) and correlation-driven blending, our approach not only enhances the realism of generated facial expressions but also preserves the subtle nuances characteristic of human emotions. Experimental results demonstrate that our method consistently outperforms traditional linear interpolation techniques, producing more distinct and recognizable blended expressions. A user study further validates the naturalness and quality of the generated expressions, with average ratings indicating a strong preference for the proposed method over existing approaches. These findings suggest significant potential for improving emotional expressiveness in both static and dynamic digital character representations.
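To make the "PCA and correlation-driven blending" idea concrete, the sketch below blends two expression-parameter vectors in a PCA subspace, attenuating components where the two expressions disagree instead of averaging them toward neutral. All names, the weighting rule, and the use of NumPy are illustrative assumptions; the paper's exact algorithm may differ.

```python
import numpy as np

def pca_basis(params, n_components=10):
    """Fit a PCA basis to a (n_samples, n_dims) matrix of expression parameters."""
    mean = params.mean(axis=0)
    centered = params - mean
    # SVD yields principal directions without forming the covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def blend_expressions(a, b, mean, basis, alpha=0.5):
    """Blend two expression parameter vectors in PCA space.

    Hypothetical correlation-driven rule: components whose PCA coefficients
    agree in sign are blended linearly; components that oppose each other
    keep the dominant expression's value rather than cancelling out.
    """
    ca = basis @ (a - mean)   # PCA coefficients of expression a
    cb = basis @ (b - mean)   # PCA coefficients of expression b
    agree = np.sign(ca * cb)  # +1 where components agree, -1 where they oppose
    w = np.where(agree >= 0, alpha,
                 np.where(np.abs(ca) >= np.abs(cb), 1.0, 0.0))
    blended = w * ca + (1.0 - w) * cb
    return mean + basis.T @ blended
```

With a full-rank basis, blending an expression with itself reconstructs it exactly, which gives a quick sanity check on the projection.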