Abstract

The capability of differentiating between various emotional states in speech represents a crucial prerequisite for successful social interactions. The aim of the present study was to investigate the neural processes underlying this differentiating ability by applying a simultaneous neuroscientific approach in order to obtain both electrophysiological (via electroencephalography, EEG) and vascular (via functional near-infrared spectroscopy, fNIRS) responses. Pseudowords spoken with angry, happy, or neutral prosody were presented acoustically to participants in a passive listening paradigm in order to capture implicit mechanisms of emotional prosody processing. Event-related brain potentials (ERPs) revealed a larger P200 and an increased late positive potential (LPP) for happy prosody, as well as larger negativities for angry and neutral prosody compared to happy prosody around 500 ms. fNIRS results showed increased activations for angry prosody at right fronto-temporal areas. A correlation between the negativity in the EEG and the fNIRS activation for angry prosody suggests analogous underlying processes, resembling a negativity bias. Overall, the results indicate that mechanisms of emotional and phonological encoding (P200), emotional evaluation (increased negativities), and emotional arousal and relevance (LPP) are present during implicit processing of emotional prosody.

Highlights

  • The capability of differentiating between various emotional states in speech represents a crucial prerequisite for successful social interactions.

  • The authors suggest that emotional information is fed from the superior temporal cortex (STC) to the inferior frontal cortex (IFC) for a first cognitive evaluation of the emotional content, while information decoded implicitly by the amygdala, without attentional focus, is sent to the medial frontal cortex (MFC), where additional emotional appraisal, evaluation, and regulation processes take place.

  • Results of the present study indicate that positive and negative emotions can be discriminated from each other, as well as from neutral prosody, on a neural level, even when the speech input does not provide any semantic content and no explicit discrimination task is given.


Introduction

The capability of differentiating between various emotional states in speech represents a crucial prerequisite for successful social interactions. The aim of the present study was to investigate the neural processes underlying this differentiating ability. Electrophysiological and vascular responses were assessed simultaneously by electroencephalography (EEG), the analysis of event-related brain potentials (ERPs), and functional near-infrared spectroscopy (fNIRS). Both methods have proven very beneficial for the investigation of acoustic stimuli, as they are soundless, do not interfere with each other, and provide an ecologically valid setting [10]. The valence hypothesis (VH) suggests that negative emotions are accompanied by more right-sided anterior activity, while pleasant emotions are associated with more left-sided anterior activity [28,29,30]. Although support for both the right-hemisphere hypothesis (RHH) and the VH has been found in several studies (please refer to [31] for an overview), an overall right-hemispheric dominance for emotional (prosody) processing has again been proposed in recent reviews [32,33]. The authors suggest that emotional information is fed from the superior temporal cortex (STC) to the inferior frontal cortex (IFC) for a first cognitive evaluation of the emotional content, while information decoded implicitly by the amygdala, without attentional focus, is sent to the medial frontal cortex (MFC), where additional emotional appraisal, evaluation, and regulation processes take place.
