Abstract

One source of information we glean from everyday experience, and one that guides social interaction, is the emotional state of others. Emotional state can be expressed through several modalities: body posture or movement, body odor, touch, facial expression, or vocal intonation. Much research has examined emotional processing within a single sensory modality or the transfer of emotional processing from one modality to another. Yet less is known about interactions across modalities when perceiving emotion, despite our common experience of seeing emotion in a face while hearing the corresponding emotion in a voice. Our study examined whether visual and auditory emotions of matched valence (congruent) confer stronger perceptual and physiological effects than visual and auditory emotions of unmatched valence (incongruent). We used psychophysics to quantify how exposure to emotional faces and/or voices altered perception, and salivary cortisol to quantify how it altered a physiological proxy for stress or arousal. While we found no significant advantage of congruent over incongruent emotions, we found that changes in cortisol were associated with perceptual changes. Following exposure to negative emotional content, larger decreases in cortisol, indicative of less stress, correlated with more positive perceptual after-effects, indicative of stronger biases to see neutral faces as happier.
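The key finding above is correlational: per-participant change in salivary cortisol tracks the perceptual after-effect. As a minimal sketch of that kind of analysis, assuming synthetic data, the Python snippet below computes a Pearson correlation between a cortisol-change measure and a PSE-shift measure. The sample size, effect size, and sign conventions are invented for illustration; this is not the authors' analysis code.

```python
# Illustrative only: synthetic stand-ins for the per-participant measures
# described in the abstract (cortisol change and PSE shift).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=1)
n = 30  # hypothetical number of participants

# Cortisol change: post-exposure minus pre-exposure concentration.
# Negative values indicate a decrease, a proxy for reduced stress.
cortisol_change = rng.normal(loc=-0.02, scale=0.05, size=n)

# Perceptual after-effect: shift in the point of subjective equality (PSE)
# following exposure to negative emotional content. By this (invented)
# convention, a negative shift means neutral faces are judged happier.
pse_shift = 0.6 * cortisol_change + rng.normal(scale=0.02, size=n)

# Test whether larger cortisol decreases go with more positive after-effects.
r, p = pearsonr(cortisol_change, pse_shift)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```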

Highlights


  • Point of subjective equality (PSE) shift for Ac, Ai, and Av compared to auditory alone (Aa); see the illustrative PSE-fitting sketch below

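The shifts in the highlight above are shifts of the point of subjective equality (PSE), the stimulus level at which the two response alternatives are equally likely. As a minimal sketch of how a PSE is commonly estimated, assuming a sad-to-happy face morph continuum and a cumulative-Gaussian psychometric function, consider the following; the morph levels, response proportions, and fitting choices are illustrative, not the paper's procedure.

```python
# Fit a cumulative-Gaussian psychometric function to the proportion of
# "happy" responses along a sad-to-happy morph continuum; the PSE is the
# morph level judged "happy" 50% of the time. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, sigma):
    """Probability of responding 'happy' at morph level x."""
    return norm.cdf(x, loc=pse, scale=sigma)

# Morph levels: 0 = fully sad, 1 = fully happy.
morph_levels = np.linspace(0.0, 1.0, 9)
# Observed proportion of "happy" responses at each level (synthetic).
p_happy = np.array([0.02, 0.05, 0.12, 0.30, 0.55, 0.78, 0.90, 0.97, 0.99])

(pse, sigma), _ = curve_fit(psychometric, morph_levels, p_happy, p0=[0.5, 0.1])
print(f"Estimated PSE = {pse:.3f} (morph level judged happy half the time)")
```

A PSE shift is then the difference between PSEs estimated in two conditions, for example after adaptation versus at baseline, or across the Ac, Ai, Av, and Aa conditions named above.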


Introduction

Emotions are expressed and perceived in many different sensory domains, multimodally [1], with emotional information conveyed via faces, voices, odors, touch, and body posture or movement [2,3,4,5]. Our ability to infer the emotional state of others, identify the potential threat they pose, and act accordingly is crucial to social interaction. Many studies have examined emotional processing within a single sensory domain, yet few have considered faces and voices together, the more common experience, which can take advantage of multimodal processes that may allow for more optimal information processing. From very early in life, we can make use of emotional information from multiple sources [6].
