Abstract

Facial expressions are a crucial but challenging aspect of animating in-game characters. They provide vital nonverbal communication cues, but given the high complexity and variability of human faces, capturing their natural diversity and affective richness can be a labour-intensive process for animators. This motivates the need for more accurate, realistic, and lightweight methods for generating emotional expressions for in-game characters. In this work, we introduce FlexComb, a Facial Landmark-based Expression Combination model designed to generate a real-time space of realistic facial expression combinations. FlexComb leverages the highly varied CelebV-HQ dataset of in-the-wild emotions and a transformer-based architecture. The central component of the FlexComb system is an emotion recognition model that is trained on this facial dataset and used to generate a larger dataset of tagged faces. The resulting system generates in-game facial expressions by sampling from this tagged dataset, including expressions that combine emotions in specified amounts. This allows in-game characters to take on a variety of realistic facial expressions for a single emotion, addressing a primary challenge of facial emotion modeling. FlexComb shows potential for expressive facial emotion simulation with applications that include animation, video game development, virtual reality, and human-computer interaction.
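
To make the sampling idea concrete, below is a minimal, hypothetical Python sketch (not the paper's actual implementation) of how one might draw a facial-landmark set from a dataset tagged with per-emotion probabilities so that the sampled face matches a requested emotion mix. All names (`sample_expression`, `tagged_probs`, the nearest-distribution softmax strategy) are assumptions for illustration only.

```python
# Hypothetical sketch: sample landmarks from an emotion-tagged dataset
# so the result approximates a requested blend of emotions.
import numpy as np

def sample_expression(tagged_probs, landmarks, target_weights, temperature=0.1):
    """tagged_probs: (N, E) per-face emotion probabilities from a recognition model.
    landmarks: (N, L, 2) facial landmarks for each tagged face.
    target_weights: (E,) desired emotion mix, e.g. 0.7 joy + 0.3 surprise.
    Returns the landmarks of one sampled face close to the target mix."""
    target = np.asarray(target_weights, dtype=float)
    target = target / target.sum()
    # Distance between each face's emotion distribution and the requested mix.
    dists = np.linalg.norm(tagged_probs - target, axis=1)
    # Softmax over negative distances: closer faces are sampled more often,
    # so repeated calls yield varied but plausible expressions for one emotion mix.
    logits = -dists / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    idx = np.random.choice(len(landmarks), p=probs)
    return landmarks[idx]

# Toy usage: blend 70% of one emotion with 30% of another over synthetic data.
rng = np.random.default_rng(0)
toy_probs = rng.dirichlet(np.ones(7), size=500)    # 7 emotion classes (assumed)
toy_landmarks = rng.normal(size=(500, 68, 2))      # 68-point landmarks (assumed)
mix = np.zeros(7)
mix[3], mix[6] = 0.7, 0.3
expr = sample_expression(toy_probs, toy_landmarks, mix)
```

Because sampling is stochastic over nearby faces rather than returning a single template, each call can yield a different yet realistic expression for the same emotion blend, which is the variety the abstract describes.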
