Abstract

Tracking emotional responses as they unfold has been one of the hallmarks of applied neuroscience and related disciplines, but recent studies suggest that automatic tracking of facial expressions has low validity. In this study, we focused on the direct measurement of the facial muscles involved in expressions such as smiling. We used single-channel surface electromyography (sEMG) to evaluate muscular activity of the Zygomaticus Major facial muscle while participants watched music videos. Participants then rated each video with regard to their thoughts and responses to it, including its emotional tone (“Valence”), their personal preference (“Liking”), and whether the video conveyed strength and impressiveness (“Dominance”). Using a minimal recording setup, we employed three measures to characterize the muscular activity associated with spontaneous smiles: the total time spent smiling (ZygoNum), the average duration of smiles (ZygoLen), and instances of high valence (ZygoTrace). Our results demonstrate that Valence was the emotional dimension most related to Zygomaticus activity, and that ZygoNum had higher discriminatory power than ZygoLen for Valence quantification. An additional investigation using fractal properties of the sEMG time series confirmed previous Facial Action Coding System (FACS) studies documenting a smoother contraction of facial muscles during enjoyment smiles. A further analysis using ZygoTrace responses over time to the video events discerned “high valence” stimuli with 76% accuracy. We further validated this approach against previous findings on valence detection using features derived from a single-channel EEG setup. We discuss these results in light of both the recent replication problems of facial expression measures and the need for methods that reliably assess emotional responses in more challenging conditions, such as Virtual Reality, in which facial expressions are often covered by the equipment used.
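For readers who want to prototype this kind of pipeline, the sketch below illustrates one plausible way to derive the three zygomaticus features and a fractal measure from a single-channel sEMG recording. It is a minimal sketch, not the authors' implementation: the envelope smoothing window, the mean-plus-2-SD activation threshold, the reading of ZygoTrace as a time-resolved activation trace, and the use of Higuchi's algorithm for the fractal analysis are all assumptions.

```python
import numpy as np

def zygo_features(emg, fs, smooth_s=0.25, thresh_k=2.0):
    """Derive ZygoNum, ZygoLen, and ZygoTrace from raw single-channel sEMG.

    smooth_s (envelope window, s) and thresh_k (SDs above the mean)
    are hypothetical parameter choices, not values from the paper.
    """
    # Rectify and smooth the signal into an amplitude envelope.
    win = max(1, int(smooth_s * fs))
    envelope = np.convolve(np.abs(emg - np.mean(emg)),
                           np.ones(win) / win, mode="same")
    # Mark supra-threshold samples as candidate smile activity.
    active = envelope > envelope.mean() + thresh_k * envelope.std()
    # Locate contiguous activation runs (candidate smiles).
    edges = np.diff(active.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    stops = np.flatnonzero(edges == -1) + 1
    if active[0]:
        starts = np.r_[0, starts]
    if active[-1]:
        stops = np.r_[stops, active.size]
    durations = (stops - starts) / fs
    return {
        "ZygoNum": durations.sum(),  # total time spent smiling (s)
        "ZygoLen": durations.mean() if durations.size else 0.0,  # mean smile duration (s)
        "ZygoTrace": active.astype(float),  # time-resolved activation trace
    }

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension of a 1-D series (smoother signal -> lower FD)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    lk = np.empty(kmax)
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):  # the k offset subseries for this scale
            idx = np.arange(m, n, k)
            if idx.size < 2:
                continue
            curve = np.abs(np.diff(x[idx])).sum()
            # Higuchi normalization for unequal subseries lengths.
            lengths.append(curve * (n - 1) / ((idx.size - 1) * k) / k)
        lk[k - 1] = np.mean(lengths)
    ks = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lk), 1)
    return slope
```

On this reading, the smoother muscle contraction reported for enjoyment smiles would appear as a lower Higuchi fractal dimension of the sEMG during activation epochs.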

Highlights

  • Humans display emotional responses in a variety of ways, including changes in facial expressions, skin conductance, heartbeat, brain signals, body temperature, and pulse rate

  • We focused on the time course of the zygomaticus major, a paired facial muscle of the cheek area that lifts the angle of the mouth upwards and laterally to allow a person to smile

  • Dominance usually shows lower variance in subjective scores than Valence, probably because it is more difficult to estimate through surveys: rating Dominance requires estimating an abstract concept, such as the degree of empowering sensation a music video can elicit


Introduction

Humans display emotional responses in a variety of ways, including changes in facial expressions, skin conductance, heartbeat, brain signals, body temperature, and pulse rate. These measurable body changes are the foundation of Affective Computing, a discipline that studies how to detect emotions and their effect on cognition, perception, learning, communication, and decision-making (Picard, 2003). Simple, yet highly intricate, the dynamics of facial expressions depend upon a complex architecture of musculature surrounding the calvaria region, orbital opening, mouth, and nose (Bentsianov and Blitzer, 2004). Among these groups of muscles, the most important one for facial expressions is in the oral area (Cohen, 2006).

