Abstract

Numerous studies have explored the benefit of iconic gestures for speech comprehension. However, only a few studies have investigated how visual attention is allocated to these gestures in the context of clear versus degraded speech, and how information is extracted from them to enhance comprehension. This study aimed to explore the effect of iconic gestures on comprehension and whether fixating a gesture is required for information extraction. Four types of gestures (i.e., semantically incongruent iconic gestures, syntactically incongruent iconic gestures, meaningless configurations, and congruent iconic gestures) were presented in a sentence context under three listening conditions (i.e., clear, partly degraded, or fully degraded speech). Using eye-tracking technology, participants’ gaze was recorded while they watched video clips, after which they were invited to answer simple comprehension questions. Results first showed that different types of gestures attract attention differently, and that the more the speech was degraded, the less attention participants paid to gestures. Furthermore, semantically incongruent gestures appeared to particularly impair comprehension despite not being fixated, while congruent gestures appeared to improve comprehension despite likewise not being fixated. These results suggest that covert attention is sufficient to convey information that will be processed by the listener.

Highlights

  • In daily conversational situations, our senses are continuously exposed to numerous types of information, not all of which are processed.

  • Paired t tests were conducted between the head and hand areas of interest (AOIs) to determine which zone was fixated most often and longest.

  • Paired t tests were conducted to investigate how visual attention allocated to the hand AOI varied with speech degradation.


Introduction

Our senses are continuously exposed to numerous types of information, not all of which are processed. Gestures accompanying speech are one such source of information. According to Kelly et al. (2004), these gestures could create a visuospatial context that affects the subsequent processing of the message. Research in this field refers to the combination of gestural and verbal information into a unified meaning as “gesture-speech integration” (Holle and Gunter, 2007). Drijvers and Özyürek (2017) observed a joint contribution of iconic gestures and visible speech (i.e., lip movements) to comprehension in a degraded-speech context. According to these authors, the semantic information conveyed through iconic gestures adds to the phonological information present in visible speech.

