Abstract
Darwin (1872) postulated that emotional expressions contain universals that are retained across species. We recently showed that human rating responses were strongly affected by a listener's familiarity with vocalization types, whereas evidence for universal cross-taxa emotion recognition was limited. To disentangle the impact of evolutionarily retained mechanisms (phylogeny) and experience-driven cognitive processes (familiarity), we compared the temporal unfolding of event-related potentials (ERPs) in response to agonistic and affiliative vocalizations expressed by humans and three animal species. Using an auditory oddball novelty paradigm, ERPs were recorded in response to task-irrelevant novel sounds, comprising vocalizations that varied in their degree of phylogenetic relationship and familiarity to humans. Vocalizations were recorded in affiliative and agonistic contexts. Offline, participants rated the vocalizations for valence, arousal, and familiarity. Correlation analyses revealed a significant correlation between a posteriorly distributed early negativity and arousal ratings. More specifically, a contextual category effect of this negativity was observed for human infant and chimpanzee vocalizations but was absent for the other species' vocalizations. Further, a significant correlation between the later, more posteriorly distributed P3a and P3b responses and familiarity ratings indicates a link between familiarity and attentional processing. A contextual category effect of the P3b was observed for the less familiar chimpanzee and tree shrew vocalizations. Taken together, these findings suggest that early negative ERP responses to agonistic and affiliative vocalizations may be influenced by evolutionarily retained mechanisms, whereas the later orienting of attention (positive ERPs) may mainly be modulated by prior experience.
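As a purely illustrative aside, the analysis logic described above (relating ERP component amplitudes to offline ratings) can be sketched in a few lines of Python. This is a minimal sketch with synthetic data; the stimulus count, array shapes, and variable names are assumptions, not the authors' pipeline.

```python
# Minimal illustrative sketch (hypothetical data, not the authors' pipeline):
# correlate per-stimulus ERP component amplitudes with offline ratings.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_stimuli = 48  # hypothetical number of novel vocalization stimuli

# Hypothetical per-stimulus mean amplitudes (microvolts) in two time windows:
# an early posterior negativity and a later P3b window.
early_negativity = rng.normal(-2.0, 1.0, n_stimuli)
p3b_amplitude = rng.normal(4.0, 1.5, n_stimuli)

# Hypothetical mean offline ratings per stimulus (e.g., on a 1-9 scale).
arousal = rng.uniform(1, 9, n_stimuli)
familiarity = rng.uniform(1, 9, n_stimuli)

# Pearson correlations mirroring the analyses described above:
# early negativity vs. arousal, P3b vs. familiarity.
r_neg, p_neg = pearsonr(early_negativity, arousal)
r_p3b, p_p3b = pearsonr(p3b_amplitude, familiarity)
print(f"early negativity ~ arousal: r = {r_neg:+.2f}, p = {p_neg:.3f}")
print(f"P3b ~ familiarity:          r = {r_p3b:+.2f}, p = {p_p3b:.3f}")
```

In practice, such amplitudes would first be extracted from the EEG epochs (e.g., the mean voltage within each component's time window at the relevant electrodes) before being correlated with the rating data.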
Highlights
The recognition of emotions conveyed in the human voice plays an important role in human social interactions.
In mammals, relatively low-frequency and broadband sounds are associated with aggressive contextual behavior, whereas high-frequency sounds with a tonal structure are associated with fearful or friendly contextual behavior.
The mean deviation between the counted and the actual number of targets was highest in the second block (2.5; SD = 3.06), which contained the most targets (56), and lowest in the last block (1.07; SD = 0.69), which contained the fewest targets (43), showing that participants were well able to follow the counting task from the beginning to the end of the experiment (a toy illustration of this measure follows).
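For concreteness, the per-block deviation measure in the last highlight can be illustrated with a toy computation. Only the 56- and 43-target block sizes come from the text; the remaining block sizes and all reported counts below are invented for illustration.

```python
# Toy illustration (invented numbers): per-block deviation between the
# reported and the actual target counts in the counting task.
import numpy as np

actual = np.array([50, 56, 49, 43])    # targets per block (blocks 1 and 3 hypothetical)
reported = np.array([                  # hypothetical counts from three participants
    [49, 58, 49, 43],
    [51, 53, 48, 44],
    [50, 60, 50, 42],
])

deviation = np.abs(reported - actual)  # absolute counting error per participant and block
print("mean deviation per block:", deviation.mean(axis=0))
print("SD of deviation per block:", deviation.std(axis=0, ddof=1))
```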
Summary
The recognition of emotions conveyed in the human voice plays an important role in human social interactions. Moreover, the encoding of acoustically conveyed emotion in animal vocalizations shows similarities with prosodic cues in human vocalizations and speech (e.g., Vettin and Todt, 2005; Hammerschmidt and Jürgens, 2007; Davila Ross et al., 2009; Zimmermann et al., 2013). These results are further supported by playback studies on cross-taxa recognition. In most of these studies, however, human participants listened to only one species, either a phylogenetically closely related species (primates) or a somewhat familiar, domesticated species (e.g., dogs, cats). It therefore remains unclear whether recognizing emotional vocalizations across species can be explained by cross-taxa universal coding and processing mechanisms resulting from phylogeny, or by familiarity alone.