Abstract

Both facial expression and tone of voice are key signals of emotional communication, but the brain correlates of their processing remain unclear. Accordingly, we constructed a novel implicit emotion recognition task consisting of simultaneously presented human faces and voices with neutral, happy, and angry valence, embedded within a monkey face and voice recognition task. To investigate the temporal unfolding of the processing of affective information from human face-voice pairings, we recorded event-related potentials (ERPs) to these audiovisual test stimuli in 18 healthy subjects; N100, P200, N250, and P300 components were observed at electrodes in the frontal-central region, while P100, N170, and P270 components were observed at electrodes in the parietal-occipital region. Results indicated a significant audiovisual stimulus effect on the amplitudes and latencies of components in the frontal-central region (P200, P300, and N250) but not the parietal-occipital region (P100, N170, and P270). Specifically, P200 and P300 amplitudes were more positive for emotional relative to neutral audiovisual stimuli, irrespective of valence, whereas N250 amplitude was more negative for neutral relative to emotional stimuli. No differentiation was observed between angry and happy conditions. The results suggest that a general effect of emotion on audiovisual processing can emerge as early as 200 msec (P200 peak latency) after stimulus onset, despite the implicit affective processing demands of the task, and that this effect is mainly distributed over the frontal-central region.

Highlights

  • The ability to extract emotional salience from visual and/or auditory signals has important implications for effective functioning in a social environment

  • Results showed that the extraction of emotional cues from multimodal input was associated with differential event-related potential (ERP) activity at frontal-central sites, indexed by the P200, N250, and P300 components

  • Given the results of the current study in conjunction with previous findings, we conclude that the processes indexed by the N100, P200, N250, and P300 components are not modality specific but rather operate on sensory percepts and their higher-order representations regardless of modality

Introduction

The ability to extract emotional salience from visual and/or auditory signals has important implications for effective functioning in a social environment. Due to their emotional significance, emotional stimuli are thought to capture attention automatically compared with neutral stimuli, in both the visual and auditory modalities [1,2,3,4,5]. Most studies have examined emotion processing from unisensory stimuli, such as voices (prosody processing) [3,5,6] and faces [1,4,7]. Some studies have examined the simultaneous processing of emotional visual and auditory signals [8,9]. The use of event-related potential (ERP) methodology for the study of emotional processing of faces and/or voices is advantageous because it allows neurocognitive processes to be tracked in real time from the moment the stimulus is presented, providing a window of inquiry into these processes before a response is made.
