Abstract

Within the framework of MPEG-4, one can envisage applications in which virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems adapt to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details remaining an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human-computer interaction, focusing on the analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen, 1978). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized notions of facial expression analysis and synthesis that are compatible with the MPEG-4 standard.
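
The parameterization outlined above can be made concrete with a small sketch. The following Python snippet is an illustration only, not the authors' implementation: it stores an archetypal expression as a profile of FAP displacements and scales it by an activation level, in the spirit of the relation between FAPs and the activation parameter. The FAP indices, displacement values, and the linear scaling rule are assumptions made for the example.

    from dataclasses import dataclass


    @dataclass
    class ExpressionProfile:
        name: str
        activation: float        # reference activation level of the archetypal emotion
        faps: dict[int, float]   # FAP index -> displacement (in MPEG-4 FAP units)


    # Hypothetical profile for one archetypal expression; FAP indices and values
    # are placeholders, not values reported in the paper.
    JOY = ExpressionProfile(
        name="joy",
        activation=1.0,
        faps={6: 120.0, 7: 120.0, 12: 80.0, 13: 80.0},
    )


    def intermediate_profile(base, target_activation):
        """Scale an archetypal FAP profile to a different activation level."""
        scale = target_activation / base.activation
        return {fap: value * scale for fap, value in base.faps.items()}


    # A milder, intermediate version of the same emotion category.
    print(intermediate_profile(JOY, target_activation=0.5))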

Highlights

  • Research in facial expression analysis and synthesis has mainly concentrated on primary or archetypal emotions

  • We describe a method for enriching human-computer interaction, focusing on the analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen, 1978)

  • This trend may be due to the great influence of the works of Ekman and Friesen [2, 3] and Izard et al. [4], who proposed that the archetypal emotions correspond to distinct facial expressions which are supposed to be universally recognizable across cultures

Summary

INTRODUCTION

Research in facial expression analysis and synthesis has mainly concentrated on primary or archetypal emotions, typically by analyzing real images and video sequences as well as by animating synthesized examples. In this work, intermediate expressions are also handled, through the combination, in the framework of a rule-based system, of the activation parameter known from Whissell's activation-emotion space with the description of the archetypal expressions by FAPs. Figure 1 illustrates the way the proposed scheme functions. The facial expression synthesis system operates either by utilizing FAP values estimated by an image analysis subsystem, or by rendering expressions recognized by a fuzzy rules system. In the former case, the motion of prominent facial points is analyzed and translated into FAP value variations, which are in turn rendered using the synthetic face model so as to reproduce the expression in question. If the results of the analysis coincide with the system's knowledge of the definition of a facial expression, the expression can be rendered using predefined FAP alteration tables. These tables are computed using the known definitions of the archetypal emotions, fortified by video data of actual human expressions.
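
As a rough illustration of the matching step, the sketch below compares estimated FAP values against per-expression FAP ranges that play the role of the alteration tables mentioned above. The ranges, FAP indices, and the simple fraction-of-rules score are hypothetical assumptions, not the paper's actual tables or fuzzy rules.

    # Hypothetical alteration table: expression -> {FAP index: (min, max) allowed range}.
    ALTERATION_TABLES = {
        "surprise": {3: (400, 560), 5: (300, 420), 19: (-350, -200), 21: (-350, -200)},
        "joy": {6: (80, 160), 7: (80, 160), 12: (50, 110), 13: (50, 110)},
    }


    def match_expression(estimated_faps):
        """Return the best-matching expression and the fraction of range rules it satisfies."""
        best, best_score = "neutral", 0.0
        for expression, ranges in ALTERATION_TABLES.items():
            satisfied = sum(
                1 for fap, (lo, hi) in ranges.items()
                if lo <= estimated_faps.get(fap, 0.0) <= hi
            )
            score = satisfied / len(ranges)
            if score > best_score:
                best, best_score = expression, score
        return best, best_score


    # FAP estimates as an (assumed) image analysis front end might produce them.
    print(match_expression({3: 480, 5: 350, 19: -300, 21: -280}))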

DESCRIPTION OF THE ARCHETYPAL EXPRESSIONS USING FAPS
THE RANGE OF VARIATION OF FAPS IN REAL VIDEO SEQUENCES
Modeling FAPs through FPs' movement
3.10 Right eye
Creating archetypal expression profiles
CREATING PROFILES FOR INTERMEDIATE EXPRESSIONS
Same universal emotion category
Emotions lying between archetypal ones
Evaluation
THE EMOTION ANALYSIS SUBSYSTEM
EXPERIMENTAL RESULTS
Creating profiles for emotions belonging to the same universal category
Creating profiles for emotions lying between the archetypal ones