Abstract

Many studies on emotional experiences in response to music have been published over the past decades; however, most have been carried out in controlled laboratory settings and rely on subjective reports. Facial expressions have occasionally been assessed, but typically with intrusive methods such as facial electromyography (fEMG). The present study investigated the emotional experiences of fifty participants at a live concert. Our aims were to explore whether automated face analysis could detect facial expressions of emotion in a group of people in an ecologically valid listening context, to determine whether emotions expressed by the music predicted specific facial expressions, and to examine whether facial expressions of emotion could be used to predict subjective ratings of pleasantness and activation. During the concert, participants were filmed, and their facial expressions were subsequently analyzed with automated face analysis software. Self-reports of participants' subjective experience of pleasantness and activation were collected after the concert for all pieces (two happy, two sad). Our results show that the pieces that expressed sadness elicited more facial expressions of sadness (compared to happiness), whereas the pieces that expressed happiness elicited more facial expressions of happiness (compared to sadness). No differences were found for the other facial expression categories (anger, fear, surprise, disgust, and neutral). Independent of the musical piece or the emotion expressed in the music, facial expressions of happiness predicted ratings of subjectively felt pleasantness, whilst facial expressions of sadness and disgust predicted low and high ratings of subjectively felt activation, respectively. Together, our results show that non-invasive measurements of audience facial expressions in a naturalistic concert setting are indicative of the emotions expressed by the music and of the subjective experiences of the audience members themselves.
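To make the analysis pipeline concrete, below is a minimal sketch of how frame-level output from automated face analysis software such as FaceReader could be aggregated into one score per participant and piece before statistical modelling. The file name and column labels are illustrative assumptions; the paper does not specify its export format.

```python
import pandas as pd

# Hypothetical frame-level export: one row per video frame, with the
# participant ID, the piece being played, and an intensity score (0-1)
# for each facial expression category (column names are assumptions).
frames = pd.read_csv("facereader_frames.csv")

expressions = ["happy", "sad", "angry", "surprised",
               "scared", "disgusted", "neutral"]

# Average the frame-level intensities within each participant x piece cell,
# yielding one expression score per category for the statistical analyses.
per_piece = (frames
             .groupby(["participant", "piece"])[expressions]
             .mean()
             .reset_index())

print(per_piece.head())
```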

Highlights

  • Our second aim was to investigate whether the emotion expressed in the music predicts specific facial expressions in the audience (Fig. 4)

  • The covariance structure with the best model fit was Compound Symmetry Heterogeneous (CSH). This analysis showed that the emotion expressed in the music had a significant main effect on facial expressions of happiness and sadness (a model sketch follows after this list)

  • We found that automated face analysis software detected facial expressions that reflected the emotion expressed in the music, although these findings were limited to a subset of the audience
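The highlights refer to a linear mixed model with a Compound Symmetry Heterogeneous (CSH) covariance structure. As a hedged illustration, the sketch below fits a random-intercept model per participant in Python's statsmodels, which implies a (homogeneous) compound-symmetry covariance across pieces; the heterogeneous variant reported here is not directly available in statsmodels and would typically be specified in R's nlme or in SPSS/SAS MIXED. The data file and column names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per participant x piece, with the
# mean intensity of "sad" expressions and the emotion the piece expressed
# (columns: participant, piece_emotion, sad).
df = pd.read_csv("per_piece_expressions.csv")

# A random intercept per participant implies a (homogeneous) compound-symmetry
# covariance; the heterogeneous CSH structure reported in the paper would need
# e.g. R's nlme (corCompSymm plus varIdent) instead.
model = smf.mixedlm("sad ~ piece_emotion", data=df, groups=df["participant"])
result = model.fit(reml=True)

# The piece_emotion coefficient tests the main effect of the emotion
# expressed in the music on facial expressions of sadness.
print(result.summary())
```

The same model can be refit with the "happy" score (or any other expression category) as the outcome to mirror the per-category analyses described above.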


Summary

Aims and objectives

The overall aim of this experiment was to investigate whether automated face analysis software can measure emotional expressions of an audience in an ecologically valid classical concert environment. Three questions were addressed: (1) Can automated face analysis detect facial expressions of emotion in an audience in this setting? (2) Does the emotion expressed by the music predict specific audience facial expressions? (3) Can we use information from facial expressions to predict audience reports of music-induced pleasantness and activation? Facial expressions were analyzed using automated face analysis software and compared with audience self-reports. Participants rated their felt experiences on two dimensions rather than on a number of emotion categories, because ratings were collected after the concert rather than after each piece. Although the measures of felt experience did not use the same emotion words as the FaceReader software or as our descriptions of the music pieces, they refer to the same underlying fundamental dimensions of emotion.
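For the third question, one simple way to relate expression scores to the two self-report dimensions is sketched below. The merged data file and column names are assumptions for illustration, and the actual analysis would account for repeated measures per participant (e.g. with the random-intercept model sketched earlier), so this is not the paper's exact procedure.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged table: per participant x piece expression scores joined
# with post-concert ratings of pleasantness and activation.
df = pd.read_csv("expressions_and_ratings.csv")

predictors = "happy + sad + angry + surprised + scared + disgusted"

# Linear models relating facial expression scores to the two rating dimensions.
pleasantness_fit = smf.ols(f"pleasantness ~ {predictors}", data=df).fit()
activation_fit = smf.ols(f"activation ~ {predictors}", data=df).fit()

# Inspect which expression categories carry weight for each dimension.
print(pleasantness_fit.summary())
print(activation_fit.summary())
```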

Participants
Procedure
Results
Discussion
Limitations
Conclusions
