Abstract

Comprehension of the information conveyed by tone of voice is highly important for successful social interactions (Grandjean et al., 2006). Based on lesion data, a superiority of the right hemisphere for the cerebral processing of speech prosody has been assumed. According to an early neuroanatomical model, prosodic information is encoded within distinct right-sided perisylvian regions organized in complete analogy to the left-sided language areas (Ross, 1981). While the majority of lesion studies are in line with the assumption that the right temporal cortex is highly important for the comprehension of speech melody (Adolphs et al., 2001; Borod et al., 2002; Heilman et al., 1984), some studies indicate that a widespread network of partially bilateral cerebral regions contributes to prosody processing, including the frontal cortex (Adolphs et al., 2002; Hornak et al., 2003; Rolls, 1999) and the basal ganglia (Cancelliere & Kertesz, 1990; Pell & Leonard, 2003). More recently, functional imaging experiments have helped to differentiate the specific functions of distinct brain areas contributing to the recognition of speech prosody (Ackermann et al., 2004; Schirmer & Kotz, 2006; Wildgruber et al., 2006). Observations in healthy subjects indicate a strong association between cerebral responses and acoustic voice properties in some regions (stimulus-driven effects), whereas other areas show modulation of activation linked to the focusing of attention on specific task components (task-dependent effects). Here we present a refined model of prosody processing and cross-modal integration of emotional signals from face and voice that differentiates successive steps of cerebral processing, involving auditory analysis and multimodal integration of communicative signals within the temporal cortex and evaluative judgements within the frontal lobes.
