Abstract

Two eye-tracking experiments investigated how and when pointing gestures and location descriptions affect target identification. Using videos of human gestures with human voice, and animated gestures with synthesized speech, the experiments examined the effect of gestures and referring expressions on the time course of fixations to the target. Ambiguous yet informative pointing gestures elicited attention and facilitated target identification, much like verbal location descriptions. Moreover, target identification was best when pointing gestures and verbal location descriptions were combined. These findings suggest that gesture does not merely serve as a context for verbal descriptions, nor verbal descriptions as a context for gesture; rather, the two complement one another in reference resolution.
