Abstract

Background: Emotional information in communicative messages is conveyed through a combination of linguistic and paralinguistic cues. Listeners must simultaneously process affective prosody, facial expressions, body gestures, and semantic information in order to determine the speaker's emotional intent. Findings from recent research support the involvement of subcortical structures in unimodal emotion recognition; however, given the propensity for emotional information to be conveyed across multiple modalities simultaneously, clinical advances require an understanding of how subcortical and cortical structures are involved in the recognition of multi-modal stimuli.

Aims: To determine whether individuals with and without a history of brain damage demonstrate affective processing biases on ambiguous and congruous multi-modal emotive stimuli.

Methods & Procedures: Twenty individuals with brain damage and five individuals without a history of brain injury (NC) were included in the study. Participants with brain damage were grouped by depth and location of lesion site (left cortical [LC], right cortical [RC], left subcortical-cortical [LS], and right subcortical-cortical [RS]) to determine the effects of damage location on emotion processing. Participants identified emotions from stimuli containing incongruent affective information across two tasks using combinations of verbal, prosodic, and visual (i.e., facial expression) information. In Task 1, affectively charged sentences were presented with conflicting paralinguistic information (speech prosody, facial expression, and a combined modality condition). Task 2 involved the presentation of conflicting facial expression and prosodic information in linguistically neutral sentences. Response preferences for specific modalities were examined to identify potential processing biases.
Outcomes & Results: The NC and LC groups demonstrated a bias for selecting emotions displayed by paralinguistic information in all Task 1 conditions. The RC group exhibited a similar preference when facial expression was included in the stimulus, but a decreased bias for selecting paralinguistic information when speech prosody was presented alone. In contrast to the NC and cortical damage groups, the RS group consistently selected emotions based on linguistic cues. The LS group exhibited the greatest inter-subject variability: group patterns suggested no clear preference for either linguistic or paralinguistic information when speech prosody was included in the stimulus, but in the task involving conflicting facial expression and linguistic information, the LS group exhibited a bias for facial expression information. All groups demonstrated a preference for facial expression information over speech prosody; however, the RS group's preference was significantly weaker.

Conclusions: The data suggest that combined cortical and subcortical damage disrupts typical processing strategies for emotion recognition.
