Abstract

Facial expressions are used in music performance to communicate structural and emotional intentions. Exposure to emotional facial expressions also may lead to subtle facial movements that mirror those expressions. Seven participants were recorded with motion capture as they watched and imitated phrases of emotional singing. Four different participants were recorded using facial electromyography (EMG) while performing the same task. Participants saw and heard recordings of musical phrases sung with happy, sad, and neutral emotional connotations. They then imitated the target stimulus, paying close attention to the emotion expressed. Facial expressions were monitored during four epochs: (a) during the target; (b) prior to their imitation; (c) during their imitation; and (d) after their imitation. Expressive activity was observed in all epochs, implicating a role of facial expressions in the perception, planning, production, and post-production of emotional singing.

Highlights

  • Facial Expressions and Emotional Singing: A Study of Perception and Production with Motion Capture and Electromyography

  • We focused on the analysis of motion capture data from two markers: the middle of the left eyebrow (BROW) and the left lip corner (LLC); an illustrative analysis sketch follows this list

  • Our focus on the left side of the face was motivated by evidence that facial movements are of greater magnitude on the left side because the right hemisphere is dominant for facial expressions (Sackeim, Gur, & Saucy, 1978)
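
Building on the two markers named above and the four epochs described in the abstract, the following is a purely illustrative sketch of how per-epoch facial activity might be summarized for a single marker's displacement trace. The sampling rate, epoch boundaries, and data values are placeholder assumptions, not the authors' recording settings or analysis pipeline.

```python
import numpy as np

# Hypothetical vertical-displacement trace (mm) for one marker (e.g., BROW),
# sampled at an assumed 100 Hz; the values are random placeholders, not study data.
rng = np.random.default_rng(seed=0)
fs = 100                                     # assumed sampling rate (Hz)
trace = rng.normal(0.0, 1.0, size=40 * fs)   # 40 s of placeholder displacement data

# Assumed epoch boundaries (seconds) following the four epochs named in the abstract:
# target presentation, pre-imitation, imitation, and post-imitation.
epochs = {
    "target": (0, 10),
    "pre_imitation": (10, 15),
    "imitation": (15, 30),
    "post_imitation": (30, 40),
}

# A crude index of expressive facial activity: mean absolute displacement per epoch.
for name, (start, end) in epochs.items():
    segment = trace[start * fs : end * fs]
    print(f"{name:>15}: mean |displacement| = {np.mean(np.abs(segment)):.3f} mm")
```

In an actual analysis, the placeholder trace would be replaced by recorded marker trajectories (or EMG amplitude), the epoch boundaries would come from each trial's timeline, and the per-epoch measures would be compared across the happy, sad, and neutral conditions.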


Introduction

When musicians are about to sing an emotional passage, advance planning of body and facial movements may facilitate accurate performance and optimize expressive communication. When musicians complete an emotional passage, the bodily movements and facial expressions that were used during production may linger in a post-production phase, allowing expressive communication to persist beyond the acoustic signal, and thereby giving greater impact and weight to the music.

Perceivers spontaneously mimic facial expressions (Bush, Barr, McHugo, & Lanzetta, 1989; Dimberg, 1982; Dimberg & Lundquist, 1988; Hess & Blairy, 2001; Wallbott, 1991), even when facial stimuli are presented subliminally (Dimberg, Thunberg, & Elmehed, 2000). They also tend to mimic tone of voice and pronunciation (Goldinger, 1998; Neumann & Strack, 2000), gestures and body posture (Chartrand & Bargh, 1999), and breathing rates (McFarland, 2001; Paccalin & Jeannerod, 2000). When an individual perceives a music performance, this process of facial mimicry may function to facilitate rapid and accurate decoding of music structure and emotional information by highlighting relevant visual and kinaesthetic cues (Stel & van Knippenberg, 2008).

