Abstract

Emotion recognition is a key component of human social cognition and is considered vital for many domains of life. Studies measuring this ability have documented that performance accuracy in emotion recognition tasks is affected by various factors, including gender, confidence in one's own judgments, hormonal fluctuations, and the modality of stimulus presentation (i.e., auditory or visual). The majority of this work has focused on the recognition of facial expressions, and the few studies that have compared vocal and facial emotion recognition report contradictory results, suggesting a lack of reliability across studies. Therefore, the main aim of this research project was to investigate the impact of the above-mentioned factors on individuals' recognition accuracy while accounting for methodological shortcomings of previous research. Two independent but related studies were conducted.

In Study 1, the first aim was to examine whether performance accuracy differs as a function of listeners' and speakers' gender. The second aim was to investigate the influence of vocal stimulus types and their related acoustic parameters on emotion recognition and confidence ratings. Additionally, it was explored whether the correct recognition of vocal emotions is accompanied by higher confidence judgments. Study 2 was pre-registered and aimed to test previous assumptions regarding males' 'poor' emotion recognition ability by investigating whether the modality of stimulus presentation (i.e., audio, visual, audio-visual) and hormonal fluctuations (i.e., testosterone, cortisol, and their interaction) affect their accuracy and response times in emotion recognition tasks. In both studies, participants were asked to categorize the stimuli with respect to the expressed emotions in a fixed-choice response format.

The results from Study 1 showed that speakers' gender had a significant impact on how listeners judged emotions from the voice; however, no robust differences in recognition accuracy were observed as a function of listeners' gender (manuscript 1). Additionally, the results from this study replicate previous findings by showing that participants could recognize emotions based on differential acoustic patterning. They further add to previous research by demonstrating that emotional expressions are recognized more accurately and judged more confidently from non-speech sounds than from emotionally inflected speech. Moreover, they showed that listeners who were better at recognizing vocal expressions of emotion were also more confident in their judgments (manuscript 2). The results from Study 2 indicated that emotion recognition accuracy and response times improve substantially when emotional expressions are presented audio-visually. In addition, they showed that happy expressions are identified faster and more accurately from faces than from voices, while angry expressions are recognized better in voices than in faces. Finally, the overall effect sizes of the testosterone-by-cortisol interaction on emotion recognition accuracy and response time were small yet significant (manuscript 3).

The combined findings from both studies help explain inconsistencies in the existing literature by highlighting the importance of distinguishing between these factors when assessing emotion recognition ability. This research project contributes to a scientific domain that is currently rewriting our understanding of the role these factors play in the recognition of emotions, and it thereby paves the way for impactful future research.
