Abstract

There has been little research on the acoustic correlates of emotional expression in the singing voice. In this study, two pertinent questions are addressed: How does a singer's emotional interpretation of a musical piece affect acoustic parameters in the sung vocalizations? Are these patterns specific enough to allow statistical discrimination of the intended expressive targets? Eight professional opera singers were asked to sing the musical scale upwards and downwards (using meaningless content) to express different emotions, as if on stage. The studio recordings were acoustically analyzed with a standard set of parameters. The results show robust vocal signatures for the emotions studied. Overall, there is a major contrast between sadness and tenderness on the one hand, and anger, joy, and pride on the other: the latter show high levels of loudness and vocal dynamics, high perturbation variation, and a tendency toward high low-frequency energy, whereas the former show low levels on these components. This pattern can be explained by the high power and arousal characteristics of the emotions with high levels on these components. A multiple discriminant analysis yields classification accuracy greatly exceeding chance level, confirming the reliability of the acoustic patterns.
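
The classification step mentioned at the end of the abstract could be reproduced along the following lines. This is only a minimal sketch: the feature matrix, emotion labels, and sample sizes are randomly generated placeholders rather than the study's data, and scikit-learn's LinearDiscriminantAnalysis with cross-validation is assumed as the discriminant classifier.

```python
# Minimal sketch of emotion classification from acoustic parameters via
# linear discriminant analysis, compared against chance level.
# All data below are random placeholders, not the study's recordings.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_emotions, n_per_emotion, n_features = 6, 16, 10          # illustrative sizes only
X = rng.normal(size=(n_emotions * n_per_emotion, n_features))  # stand-in acoustic parameters
y = np.repeat(np.arange(n_emotions), n_per_emotion)            # stand-in intended-emotion labels

lda = LinearDiscriminantAnalysis()
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracy = cross_val_score(lda, X, y, cv=cv).mean()

chance = 1.0 / n_emotions                                   # ~0.17 for six emotion categories
print(f"cross-validated accuracy: {accuracy:.2f} (chance level {chance:.2f})")
```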

Highlights

  • Because of the important evolutionary function of emotions, a premium is placed on expressing them as reliably as possible and on observers inferring their meaning as accurately as possible

  • We first examined the effects of the singers’ emotion targets on the acoustic parameters by running a multivariate analysis of variance (MANOVA) over the complete selected variable set, to obtain an indication of the extent to which the six chosen emotions produced significant differences in these variables (a sketch of such an analysis follows this list)

  • We examined the effects of a singer’s interpretation of different types of emotion on a number of central acoustic parameters and the capacity of these parameters to allow statistical discrimination of the intended expressive targets
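
To make the analysis pipeline described in the highlights concrete, the sketch below shows how a MANOVA with the singers' intended emotion as the factor over a set of acoustic parameters could be set up. The variable names, the generated data, and the use of statsmodels' MANOVA are assumptions for illustration; they are not the study's variables or code.

```python
# Minimal sketch: MANOVA over several acoustic parameters with intended
# emotion as the grouping factor. Data and column names are placeholders.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
emotions = [f"emotion_{i}" for i in range(1, 7)]   # six placeholder emotion labels
n_per_emotion = 16                                  # illustrative number of sung scales per emotion
n_total = len(emotions) * n_per_emotion

df = pd.DataFrame({
    "emotion": np.repeat(emotions, n_per_emotion),
    "loudness": rng.normal(size=n_total),           # stand-in acoustic parameters
    "jitter": rng.normal(size=n_total),
    "spectral_slope": rng.normal(size=n_total),
})

# Overall multivariate tests (Wilks' lambda, Pillai's trace, ...) of the
# emotion factor across the full dependent-variable set
manova = MANOVA.from_formula("loudness + jitter + spectral_slope ~ emotion", data=df)
print(manova.mv_test())
```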



Introduction

Because of the important evolutionary function of emotions, a premium is placed on expressing them as reliably as possible and on observers inferring their meaning as accurately as possible. Much research has investigated the role of the face in this process, but work on the voice lags behind in elucidating the effects of different emotions on the vocal mechanisms, the acoustic cues generated, and the nature of the processes that allow listeners to recognize affective states. Starting with Darwin’s (1872) monumental monograph on the expression of emotion in man and animals, the study of how different emotions are expressed in the face, voice, and body, and how well the underlying affective states can be recognized by conspecific observers, has been, and still is, a central domain of emotion research. A recent review of the literature (Scherer et al., 2011) has shown that the results of 135 studies provide overwhelming evidence for the human capacity to infer a person’s emotion from his/her nonverbal expression with a degree of accuracy that far exceeds chance expectations. An important aspect of this evidence is that accuracy rates …
