Abstract

Multi-modal discourse comprehension requires listeners to combine information from speech and gestures. To date, little research has addressed the cognitive resources that underlie these processes. Here we used a dual-task paradigm to test the relative importance of verbal and visuospatial working memory in speech-gesture comprehension. Healthy, college-aged participants encoded either a series of digits (verbal load) or a series of dot locations in a grid (visuospatial load) and rehearsed them (secondary memory task) while performing a primary multi-modal discourse comprehension task. Regardless of the secondary task, performance on the discourse comprehension task was better when the speaker's gestures and speech were congruent than when they were incongruent. However, the congruity advantage was smaller when the concurrent memory task involved a visuospatial load than when it involved a verbal load. These results suggest that taxing the visuospatial working memory system reduced participants' ability to benefit from the information in congruent iconic gestures. A control experiment demonstrated that the results were not an artifact of the difficulty of the visuospatial load task. Overall, these data suggest that listeners recruit visuospatial working memory to interpret gestures about concrete visual scenes.
