Speech-accompanying gestures constitute one information channel during communication. Some have argued that processing gestures engages the brain regions that support language comprehension. However, the studies cited as evidence for shared mechanisms suffer from one or more of the following limitations: they (a) did not directly compare activations for gesture and language processing in the same study, relying instead on fallacious reverse inference (Poldrack, 2006) for interpretation, (b) relied on traditional group analyses, which are bound to overestimate overlap (e.g., Nieto-Castañon and Fedorenko, 2012), (c) failed to directly compare the magnitudes of response (e.g., Chen et al., 2017), and (d) focused on gestures that may have activated the corresponding linguistic representations (e.g., “emblems”). To circumvent these limitations, we used fMRI to examine responses to gesture processing in language regions defined functionally in individual participants (e.g., Fedorenko et al., 2010), directly comparing effect sizes and covering a broad range of spontaneously generated co-speech gestures. Whenever speech was present, language regions responded robustly (and to a similar degree regardless of whether the video contained gestures or grooming movements). In contrast, and critically, responses in the language regions were low – at or slightly above the fixation baseline – when silent videos were processed (again, regardless of whether they contained gestures or grooming movements). Brain regions outside of the language network, including some in close proximity to language regions, differentiated between gestures and grooming movements, ruling out the possibility that the gesture/grooming manipulation was too subtle to detect. Behavioral studies on the critical video materials further showed robust differentiation between the gesture and grooming conditions. In summary, contra prior claims, language-processing regions do not respond to co-speech gestures in the absence of speech, suggesting that these regions are selectively driven by linguistic input (e.g., Fedorenko et al., 2011). Although co-speech gestures are uncontroversially important in communication, they appear to be processed in brain regions distinct from those that support language comprehension, similar to other extra-linguistic communicative signals, such as facial expressions and prosody.