Abstract

Voice-induced cross-taxa emotional recognition is the ability to understand the emotional state of another species based on its voice. In the past, induced affective states, experience-dependent higher cognitive processes, and cross-taxa universal acoustic coding and processing mechanisms have all been proposed to underlie this ability in humans. The present study sets out to distinguish the influence of familiarity and phylogeny on voice-induced cross-taxa emotional perception in humans. For the first time, two perspectives are taken into account: the self-perspective (i.e. emotional valence induced in the listener) versus the others-perspective (i.e. correct recognition of the emotional valence of the recording context). Twenty-eight male participants listened to 192 vocalizations of four different species (human infant, dog, chimpanzee and tree shrew). Stimuli were recorded either in an agonistic (negative emotional valence) or affiliative (positive emotional valence) context. Participants rated the emotional valence of the stimuli adopting self- and others-perspective by using a 5-point version of the Self-Assessment Manikin (SAM). Familiarity was assessed based on subjective rating, objective labelling of the respective stimuli and interaction time with the respective species. Participants reliably recognized the emotional valence of human voices, whereas the results for animal voices were mixed. The correct classification of animal voices depended on the listener's familiarity with the species and the call type/recording context, whereas there was less influence of induced emotional states and phylogeny. Our results provide the first evidence that explicit voice-induced cross-taxa emotional recognition in humans is shaped more by experience-dependent cognitive mechanisms than by induced affective states or cross-taxa universal acoustic coding and processing mechanisms.

Highlights

  • The recognition of affective information in human voice plays an important role in human social interaction and is linked to human empathy, which refers to the capacity to perceive, understand and respond to the unique affective state of another person (e.g., [1,2])

  • Based on the objective familiarity rating, a two-factorial repeated-measures ANOVA revealed significant main effects of context (F = 55.89, df = 1, N = 28, p < 0.001) and species (F = 383.96, df = 1.66, N = 28, p < 0.001) and a significant interaction between the two (F = 98.51, df = 1.1, N = 28, p < 0.001; Figure 2a). This indicates that the effect of context on the objective familiarity rating depended on the species

  • Influence of familiarity and self-perspective on cross-taxa emotional recognition: we found a significant positive correlation across the playback categories between the emotional correct assignment index (ECI) and the species recognition index, and between the ECI and the interaction time, i.e. time spent with the respective species (r = 0.820, N = 8, p = 0.013), emphasizing the link between familiarity and cross-taxa emotional recognition
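The correlation reported in the highlight above is a standard Pearson product-moment correlation. As a minimal sketch of the computation, the following pure-Python function implements the Pearson r from its definition; the input values are hypothetical placeholders, not the study's actual ECI or interaction-time data, which are not reproduced here.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Covariance term (numerator) and product of standard deviations (denominator).
    num = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mean_x) ** 2 for a in x)
                    * sum((b - mean_y) ** 2 for b in y))
    return num / den

# Hypothetical example: eight playback categories (N = 8), with an
# emotional correct assignment index (ECI) loosely tracking interaction time.
eci = [0.55, 0.60, 0.62, 0.70, 0.75, 0.80, 0.85, 0.90]
interaction_time = [1.0, 2.0, 2.5, 4.0, 5.0, 7.0, 8.0, 10.0]
print(round(pearson_r(eci, interaction_time), 3))
```

A significance test for r (as in the reported p = 0.013) would additionally require a t-distribution with N − 2 degrees of freedom, e.g. via `scipy.stats.pearsonr`.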


Introduction

The recognition of affective information in human voice plays an important role in human social interaction and is linked to human empathy, which refers to the capacity to perceive, understand and respond to the unique affective state of another person (e.g., [1,2]). Human speech and human non-linguistic vocalizations convey emotional states in the form of prosodic cues (e.g., [3,4,5,6]). Based on these prosodic cues, humans are able to recognize the emotional state of other humans (e.g., [7,8,9,10]). Cross-cultural studies demonstrated that humans with different linguistic backgrounds exhibit many similarities in terms of how they express and identify emotions in human voices and music (e.g., [11,12,13,14,15]). This may suggest that affective prosodic components in humans are predominantly organized by innate mechanisms and may have derived from a pre-human origin [6].

