Abstract

Laughter and smiling are significant facial expressions used in human-to-human communication. We present a computational model for generating the facial expressions associated with laughter and smiling, in order to facilitate the synthesis of such expressions in virtual characters. In addition, a new method to reproduce these types of laughter is proposed and validated using databases of generic and specific facial smile expressions. In particular, a proprietary database of laugh and smile expressions is also presented; it catalogues the different types of laughs classified and generated in this work. The generated expressions are validated through a user study with 71 subjects, which concluded that the virtual character expressions built using the presented model are perceptually acceptable in quality and facial expression fidelity. Finally, for generalization purposes, an additional analysis shows that the results are independent of the virtual character's appearance.

Highlights

  • Facial expression modelling of virtual characters presents many difficulties, including time constraints, cost and complexity

  • Ekman and Friesen published the Facial Action Coding System (FACS) [7], which has been used as a standard for categorizing the facial expressions of emotions

  • We present a computational model for the generation of facial expressions associated with laughter and smiling in order to facilitate the synthesis of such facial expressions in virtual characters


Summary

Introduction

Facial expression modelling of virtual characters presents many difficulties, including time constraints, cost and complexity. To learn the animation curves needed to create realistic expressions involving laughs and smiles, we employ several facial expression data sets covering the types of laughter and smiles defined in the previous taxonomy. For the open-mouth smile expression, recorded by the seven subjects of our experiment (Fig. 7), Table 3 shows the alignment values for the smileFrow controller. This process is extended to all controllers and to all laugh and smile facial expressions in order to learn the representative curves. Newly generated expressions take into account the different performances included in the data set, generalizing the virtual character animation curves for the synthesis of new laugh and smile facial expressions. This is demonstrated in the work of Ruch [6], which describes how the laughter cycle depends essentially on lung volume, which in turn varies from subject to subject.
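The idea of learning a representative curve from several recorded performances can be sketched as follows. This is only a minimal illustration, not the paper's actual method: it assumes each subject's controller curve (e.g. for the smileFrow controller) is a sampled 1-D array, aligns them by simple time normalization (the paper's alignment step may differ), and averages them; the function name `representative_curve` is hypothetical.

```python
import numpy as np

def representative_curve(curves, n_samples=100):
    """Time-normalize each subject's controller curve and average them.

    curves: list of 1-D arrays, one animation curve per subject
            (e.g. smileFrow controller values over one expression).
    Returns a representative curve resampled onto a common timeline.
    """
    common_t = np.linspace(0.0, 1.0, n_samples)
    resampled = []
    for c in curves:
        t = np.linspace(0.0, 1.0, len(c))            # subject's own timeline
        resampled.append(np.interp(common_t, t, c))  # align to common timeline
    return np.mean(resampled, axis=0)                # average across subjects
```

Because every performance is resampled onto the same normalized timeline before averaging, curves of different lengths (different laugh durations) contribute equally to the learned curve.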
