Abstract

To delineate the brain regions specifically involved in processing the affective components of spoken language (affective or emotive prosody), we conducted two event-related potential experiments. Cortical activation patterns were assessed by scalp recordings of direct current components of the EEG signal. Right-handed subjects discriminated pairs of declarative sentences with happy, sad, or neutral intonation. Each stimulus pair was derived from two identical original utterances that, owing to digital signal manipulation, differed slightly in fundamental frequency (F0) range or in the duration of stressed syllables. In the first experiment, subjects were asked (i) to name the emotional category of each sentence pair and (ii) to decide which of the two items displayed the stronger emotional expressiveness. Participants in the second experiment performed the same discrimination task but, in addition, repeated the utterances in inner speech during stimulus presentation. In the absence of inner speech, predominant activation of right frontal regions was observed, irrespective of emotional category. In the second experiment, discrimination performed together with inner speech yielded bilateral activation with left frontal preponderance. Compared with the first experiment, a different pattern of acoustic signal processing emerged: brain activity decreased during processing of the F0 stimulus variants, whereas activation increased during discrimination of the duration-manipulated sentence pairs. Analysis of the behavioural data revealed no significant differences between the two experiments in the evaluation of expressiveness. We conclude that the topographical shift of cortical activity originates from left hemisphere (LH) mechanisms of speech processing centred on the subvocal rehearsal system, the articulatory control component of the phonological loop. Subvocal articulatory activity such as inner speech initiates a strong coupling of the acoustic input and the (planned) verbal output channel in the LH. These neural networks may interpret verbal acoustic signals in terms of motor programs and facilitate continuous control of speech output by comparing the signal produced with the signal intended. Most likely, information on the motor aspects of suprasegmental signal characteristics contributes to the evaluation of the affective components of spoken language. Consequently, the right hemisphere (RH) holds only a relative dominance, both for the processing of F0 and for the evaluation of the emotional significance of sensory input. Psychophysically, an important determinant of the lateralization pattern appears to be the degree of communicative demand: purely perceptive (RH) versus both perceptive and verbal-expressive (RH and LH).
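The F0-range manipulation described above can be illustrated with a short signal-processing sketch. The Python example below is a minimal, hypothetical reconstruction, not the authors' actual procedure (the paper does not specify its tooling): it uses WORLD analysis/resynthesis via the pyworld package together with soundfile, and the file names and scaling factor are purely illustrative.

```python
# Minimal sketch (assumed tooling, not the authors' method): compress or
# expand the F0 range of an utterance around its mean, leaving the mean
# unchanged, via WORLD analysis/resynthesis.
# Requires: pip install pyworld soundfile
import numpy as np
import pyworld as pw
import soundfile as sf

x, fs = sf.read("utterance.wav")           # hypothetical mono input file
x = np.ascontiguousarray(x, dtype=np.float64)

# WORLD decomposition: F0 contour, spectral envelope, aperiodicity
f0, t = pw.harvest(x, fs)
sp = pw.cheaptrick(x, f0, t, fs)
ap = pw.d4c(x, f0, t, fs)

# Scale F0 excursions geometrically around the log-mean of voiced frames:
# alpha > 1 widens the F0 range (more "expressive"), alpha < 1 narrows it.
voiced = f0 > 0
log_mean = np.mean(np.log(f0[voiced]))
alpha = 1.15                               # illustrative scaling factor
f0_mod = f0.copy()
f0_mod[voiced] = np.exp(log_mean + alpha * (np.log(f0[voiced]) - log_mean))

y = pw.synthesize(f0_mod, sp, ap, fs)      # resynthesize with modified contour
sf.write("utterance_f0_wide.wav", y, fs)
```

The duration manipulation could be sketched analogously by stretching the frame span of a stressed syllable (repeating or interpolating the corresponding frames of f0, sp, and ap before resynthesis); a global time stretch is also available in common libraries, e.g. librosa.effects.time_stretch.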
