Abstract

The coordination of attention between individuals is a fundamental part of everyday human social interaction. Previous work has focused on the role of gaze information for guiding responses during joint attention episodes. However, in many contexts, hand gestures such as pointing provide another valuable source of information about the locus of attention. The current study developed a novel virtual reality paradigm to investigate the extent to which initiator gaze information is used by responders to guide joint attention responses in the presence of more visually salient and spatially precise pointing gestures. Dyads were instructed to use pointing gestures to complete a cooperative joint attention task in a virtual environment. Eye and hand tracking enabled real-time interaction and provided objective measures of gaze and pointing behaviours. Initiators displayed gaze behaviours that were spatially congruent with the subsequent pointing gestures. Responders overtly attended to the initiator’s gaze during the joint attention episode. However, both these initiator and responder behaviours were highly variable across individuals. Critically, when responders did overtly attend to their partner’s face, their saccadic reaction times were faster when the initiator’s gaze was also congruent with the pointing gesture, and thus predictive of the joint attention location. These results indicate that humans attend to and process gaze information to facilitate joint attention responsivity, even in contexts where gaze information is implicit to the task and joint attention is explicitly cued by more spatially precise and visually salient pointing gestures.
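To make the congruence analysis described above concrete, the sketch below labels each trial by whether the initiator's gaze target matched their pointing target and compares responder saccadic reaction times (SRTs) across the two trial types. This is a minimal illustration only: the record fields, values, and variable names are assumptions, not the study's actual data or analysis pipeline.

```python
import numpy as np

# Hypothetical per-trial records: all field names and values are illustrative
# assumptions, not the authors' actual data format.
trials = [
    {"initiator_gaze_target": "A", "pointing_target": "A", "responder_srt_ms": 210.0},
    {"initiator_gaze_target": "B", "pointing_target": "A", "responder_srt_ms": 265.0},
    {"initiator_gaze_target": "C", "pointing_target": "C", "responder_srt_ms": 198.0},
]

# Label each trial as gaze-congruent (initiator looked where they pointed) or not.
congruent = np.array(
    [t["initiator_gaze_target"] == t["pointing_target"] for t in trials]
)
srt = np.array([t["responder_srt_ms"] for t in trials])

# Compare mean saccadic reaction times across the two trial types.
print(f"Congruent trials:   mean SRT = {srt[congruent].mean():.1f} ms")
print(f"Incongruent trials: mean SRT = {srt[~congruent].mean():.1f} ms")
```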

Highlights

  • In their recent theoretical and empirical work, Yu and Smith[23,24] highlight how joint attention can be achieved using multiple non-verbal behaviours including gaze and pointing gestures

  • To better characterize the mechanisms of joint attention, we must consider how multiple communicative gestures are used together to achieve social coordination. This requires addressing three key empirical blind spots: (i) whether adults display useful gaze information when explicitly initiating joint attention via other gestures; (ii) the extent to which adults attend to the face of others before responding to joint attention bids; and (iii) whether relevant spatial information conveyed by the gaze of an initiator influences the efficiency with which a responder can achieve joint attention with their partner

  • Our findings reveal that initiator gaze does contain predictive spatial information that can be used to guide the attention of responders


Introduction

In their recent theoretical and empirical work, Yu and Smith[23,24] highlight how joint attention can be achieved using multiple non-verbal behaviours, including gaze and pointing gestures. To better characterize the mechanisms of joint attention, we must consider how multiple communicative gestures are used together to achieve social coordination. This requires addressing three key empirical blind spots: (i) whether adults display useful gaze information when explicitly initiating joint attention via other gestures (e.g., hand pointing); (ii) the extent to which adults attend to the face of others before responding to joint attention bids; and (iii) whether relevant spatial information conveyed by the gaze of an initiator influences the efficiency with which a responder can achieve joint attention with their partner. Our virtual reality paradigm allows eye movement data to be automatically and objectively segmented and analysed across dynamic areas of interest (e.g., the face of each avatar as participants move during and across trials) whilst remaining temporally aligned to body movement data. Using this approach, we first investigated whether there was predictive information in the spatial pattern of an initiator’s gaze shifts before initiating joint attention via pointing; specifically, whether initiators fixate the target location before pointing to it, thereby providing predictive spatial information about the upcoming joint attention bid.
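As a minimal sketch of how gaze samples can be scored against a dynamic area of interest, the snippet below performs a per-frame ray-sphere test between the tracked gaze ray and a sphere centred on the partner's face, whose position is re-sampled every frame as the avatar moves. The function name, AOI radius, and data format are assumptions for illustration, not the study's actual implementation.

```python
import numpy as np

def gaze_hits_face(gaze_origin, gaze_dir, face_centre, face_radius=0.12):
    """Return True if the gaze ray intersects a sphere around the partner's face.

    A simple ray-sphere test: the AOI is 'dynamic' because face_centre is
    re-sampled every frame from the partner's tracked head position.
    """
    d = gaze_dir / np.linalg.norm(gaze_dir)
    to_face = face_centre - gaze_origin
    along = np.dot(to_face, d)        # projection of face offset onto the gaze ray
    if along < 0:                     # face is behind the viewer
        return False
    perp_sq = np.dot(to_face, to_face) - along**2
    return perp_sq <= face_radius**2

# Hypothetical synchronized streams (one tuple per frame): gaze origin and
# direction from the eye tracker, face position from the partner's avatar.
frames = [
    (np.zeros(3), np.array([0.0, 0.0, 1.0]), np.array([0.05, 0.0, 2.0])),
    (np.zeros(3), np.array([0.3, 0.0, 1.0]), np.array([0.05, 0.0, 2.0])),
]
face_fixated = [gaze_hits_face(o, g, f) for o, g, f in frames]
print(face_fixated)  # e.g. [True, False]
```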
