Abstract

The role of nonverbal communication in patients with post-stroke language impairment (aphasia) is not yet fully understood. This study investigated how aphasic patients perceive and produce co-speech gestures during face-to-face interaction, and whether distinct brain lesions would predict the frequency of spontaneous co-speech gesturing. For this purpose, we recorded samples of face-to-face conversations with patients with aphasia and with healthy participants. Gesture perception was assessed by means of a head-mounted eye-tracking system, and the produced co-speech gestures were coded according to a linguistic classification system. The main results are that meaning-laden gestures (e.g., iconic gestures representing object shapes) are more likely to attract visual attention than meaningless hand movements, and that patients with aphasia are more likely to fixate co-speech gestures overall than healthy participants. This implies that patients with aphasia may benefit from the multimodal information provided by co-speech gestures. On the level of co-speech gesture production, we found that patients with damage to the anterior part of the arcuate fasciculus showed a higher frequency of meaning-laden gestures. This area lies in close vicinity to the premotor cortex and is considered to be important for speech production. This may suggest that the use of meaning-laden gestures depends on the integrity of patients’ speech production abilities.
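To make the kind of data behind these analyses concrete, the sketch below shows one plausible way to represent coded gesture events and eye-tracking fixations, and to test whether a fixation lands on an ongoing gesture. It is a minimal illustration in Python: the category labels, field names, and the overlap rule are assumptions made for the example, not the study's actual coding scheme or eye-tracking pipeline.

    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical gesture categories. The abstract contrasts meaning-laden
    # gestures (e.g., iconic) with meaningless hand movements; the exact
    # classification system used in the study is not reproduced here.
    class GestureCategory(Enum):
        ICONIC = "iconic"              # meaning-laden: depicts object shape or action
        DEICTIC = "deictic"            # meaning-laden: pointing
        BEAT = "beat"                  # meaningless rhythmic movement
        SELF_ADAPTOR = "self_adaptor"  # meaningless, e.g., scratching

    @dataclass
    class GestureEvent:
        speaker_id: str
        category: GestureCategory
        onset_s: float   # gesture start, seconds into the conversation
        offset_s: float  # gesture end

    @dataclass
    class Fixation:
        listener_id: str
        onset_s: float
        offset_s: float
        on_hands: bool   # True if the fixated region contains the speaker's hands

    def fixates_gesture(fix: Fixation, gesture: GestureEvent) -> bool:
        """Return True if the fixation lands on the hands while the gesture unfolds."""
        overlaps = fix.onset_s < gesture.offset_s and fix.offset_s > gesture.onset_s
        return fix.on_hands and overlaps

    # Example: a fixation on the hands during an iconic gesture counts as a gesture fixation.
    g = GestureEvent("speaker_1", GestureCategory.ICONIC, onset_s=12.4, offset_s=13.1)
    f = Fixation("listener_1", onset_s=12.6, offset_s=12.9, on_hands=True)
    print(fixates_gesture(f, g))  # True

Counting such overlaps per participant and gesture category yields the fixation frequencies that the highlights below analyze statistically.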

Highlights

  • Co-speech gestures are omnipresent during face-to-face interaction

  • In line with the results obtained for the variable gesture fixation, a generalized linear mixed model (GLMM) on the variable change in gaze direction showed a significant impact of the factors gesture category (z = −4.684, p < 0.001) and group (z = −1.728, p = 0.044); see the illustrative model sketch after these highlights

  • Meaning-laden gestures led to more changes in gaze direction than abstract gestures, and patients with aphasia were more likely to change their direction of gaze during co-speech gestures than healthy participants (Figure 3)
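The binomial GLMM referred to in the highlights can be approximated with off-the-shelf tools. The sketch below is a minimal illustration using synthetic data and the BinomialBayesMixedGLM class from statsmodels; the variable names (fixated, category, group, participant), the random-effects structure, and the software choice are assumptions for the example, not the authors' actual analysis.

    # Illustrative only: a synthetic re-creation of the kind of binomial GLMM
    # described above (fixated vs. not fixated, with gesture category and group
    # as fixed effects and participant as a random intercept).
    import numpy as np
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    rng = np.random.default_rng(0)
    n = 400
    data = pd.DataFrame({
        "fixated": rng.integers(0, 2, size=n),  # 1 = gesture was fixated (random here)
        "category": rng.choice(["meaning_laden", "meaningless"], size=n),
        "group": rng.choice(["aphasia", "control"], size=n),
        "participant": rng.choice([f"p{i:02d}" for i in range(20)], size=n),
    })

    # Random intercept per participant, expressed as a variance component.
    model = BinomialBayesMixedGLM.from_formula(
        "fixated ~ C(category) + C(group)",
        {"participant": "0 + C(participant)"},
        data,
    )
    result = model.fit_vb()   # variational Bayes fit; fit_map() is the alternative
    print(result.summary())

Because the synthetic outcome is random, the fixed-effect estimates here hover around zero; with real coded data, the z-statistics for category and group in the summary correspond conceptually to the z values quoted in the highlights.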



Introduction

Co-speech gestures occur even when the interaction partner is not visually present, e.g., when people are talking on the phone. This implies that co-speech gestures not only convey communicative meaning (McNeill, 1992) but also support speech production by facilitating lexical retrieval (Rauscher et al., 1996; Krauss and Hadar, 1999). Brain areas that are typically activated during language perception also respond when people perceive gestures (Andric and Small, 2012), which implies that co-speech gesture perception and language processing rely on shared neural networks (Xu et al., 2009). Only a few studies have addressed gesture perception in aphasic patients (Records, 1994; Preisig et al., 2015; Eggenberger et al., 2016).

