Abstract

The ability to accurately perceive emotions is crucial for effective social interaction. Many questions remain regarding how different sources of emotional cues in speech (e.g., prosody, semantic information) are processed during emotional communication. Using a cross-modal emotional priming paradigm (the facial affect decision task), we compared the relative effects of processing utterances with single-channel (prosody-only) versus multi-channel (combined prosodic and semantic) cues on the perception of happy, sad, and angry emotional expressions. Our data show that emotional speech cues produce robust congruency effects on decisions about an emotionally related face target, although no processing advantage occurred when prime stimuli contained multi-channel rather than single-channel speech cues. Our data suggest that utterances with prosodic cues alone and utterances with combined prosodic and semantic cues both activate knowledge that leads to emotional congruency (priming) effects, but that the convergence of these two information sources does not always heighten access to this knowledge during emotional speech processing.

Highlights

  • In pop culture, it is often said that it matters less "what" is said than "how" one says it

  • Is there truth to this adage? How important are the words compared to the tone in the perception of emotions in speech, and does this processing differ depending on the emotion expressed? Of special interest here, how does emotional processing change over the course of a spoken utterance that contains vocal cues about emotion and an emerging semantic context for interpreting the speaker's emotional state? Is emotion recognition bolstered in some way at the intersection of prosodic and semantic cues during emotional speech processing? The current study seeks to address these questions and provide insight into how emotion is implicitly recognized, distinguished, and processed according to the availability of particular cues in speech

  • Many questions remain regarding the manner in which different emotional cues are processed and integrated to infer a speaker’s emotional state as humans converse; in particular, it is poorly understood how prosodic cues, which are omnipresent in speech, are modulated by an emerging semantic context conveying emotion as spoken language unfolds and is assigned meaning

Introduction

It is often said that it matters less "what" is said than "how" one says it. The present study examined the extent of cross-modal priming (emotional congruency effects) between an utterance prime and an emotional face target in two conditions within the same utterance: a "pre-semantic" condition, in which semantic information about emotion was insufficient and only prosodic information signaled discrete emotional meanings; and a "post-semantic" condition, which indexed the point in the utterance at which both prosodic and semantic cues unambiguously marked the speaker's emotional meaning. This design allowed us to compare any difference in the magnitude of priming associated with the presence of combined channels of emotional speech information in a more sensitive manner than previous studies. We expected emotion-specific differences in how emotional face targets are processed, such as a processing advantage for happy faces, as has been reported in previous experiments [21,27,32,37].
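To make the priming comparison concrete, the sketch below shows one common way a congruency effect of this kind is quantified: the difference in mean response latency between emotionally incongruent and congruent prime-target pairs, computed separately for the pre-semantic and post-semantic conditions. This is only an illustrative sketch, not the authors' analysis pipeline; the column names (condition, congruent, rt_ms) and the reaction times are invented for demonstration.

```python
# Illustrative sketch: quantifying cross-modal priming (congruency) effects
# from trial-level reaction times, separately for the pre-semantic and
# post-semantic prime conditions. All values and column names are hypothetical.

import pandas as pd

# Each row is one trial: the prime condition, whether the face target's
# emotion matched the prime's emotion, and the decision latency in ms.
trials = pd.DataFrame(
    {
        "condition": ["pre-semantic"] * 4 + ["post-semantic"] * 4,
        "congruent": [True, False, True, False, True, False, True, False],
        "rt_ms": [612, 655, 598, 641, 605, 660, 590, 648],
    }
)

# Priming magnitude = mean RT on incongruent trials minus mean RT on
# congruent trials, computed within each prime condition.
mean_rt = trials.groupby(["condition", "congruent"])["rt_ms"].mean().unstack()
priming_effect = mean_rt[False] - mean_rt[True]
print(priming_effect)
```

Under this scheme, a larger positive value for a condition indicates a stronger congruency (priming) effect for that prime condition, which is the quantity being compared across the pre-semantic and post-semantic conditions.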
