Abstract

A major unresolved question in understanding visually guided locomotion in humans is whether actions are driven solely by the immediately available optical information (model-free online control mechanisms), or whether internal models have a role in anticipating the future path. We designed two experiments to investigate this issue, measuring spontaneous gaze behaviour while steering, and predictive gaze behaviour when future path information was withheld. In Experiment 1, participants (N = 15) steered along a winding path with rich optic flow: gaze patterns were consistent with tracking waypoints on the future path 1–3 s ahead. In Experiment 2, participants (N = 12) followed a path presented only in the form of visual waypoints located on an otherwise featureless ground plane. New waypoints appeared periodically every 0.75 s, at a predictable location 2 s ahead, except that in 25% of cases the waypoint at the expected location was not displayed. In these cases there were always other visible waypoints for the participant to fixate, yet participants continued to make saccades to the empty, but predictable, waypoint locations (in line with internal models of the future path guiding gaze fixations). This would not be expected on the basis of existing model-free online steering control models, and strongly points to a need for models of steering control to include mechanisms for predictive gaze control that support anticipatory path-following behaviours.

Highlights

  • Careful observational studies of gaze behaviour when steering in highly naturalistic conditions – including field experiments in the wild using mobile eye tracking[3,10,13] – cannot, even at their most accurate, resolve the mechanisms underpinning oculomotor and locomotor coordination

  • In Experiment 2 we investigated these gaze control processes further by specifying the path using only waypoints, and occasionally withholding this waypoint information

  • This rules out simple bottom-up visual transitions or salience as explanations of gaze control in this active steering task


Introduction

While the where and when of human gaze behaviour when steering has been investigated in a number of careful observational studies in highly naturalistic conditions – including field experiments in the wild using mobile eye tracking[3,10,13] (for reviews see[14,15]) – even the most accurate naturalistic observational techniques cannot resolve the mechanisms underpinning oculomotor and locomotor coordination. One possibility is that internal models anticipating the future path guide locomotion; the alternative would be that locomotor behaviour is better accounted for by model-free online control mechanisms driven by optically available information, attuned to relevant environmental cues[25] (for a review of model-free locomotor control see[5]). Both approaches can account for anticipatory steering, but do so in very different ways. Given the rich pattern of potential stimulus cues present in naturalistic conditions, it is not trivial to design an experiment where performance could not plausibly be explained by purely online processes. Examining this question effectively requires a scene from which external visual cues can be removed, but which at the same time allows the human to exhibit behaviours close to naturalistic behaviour that would only be expected on the basis of an internal model. A model-based approach to perception and control would potentially integrate the visual control of steering literature with the literature on gaze control as prediction[31] and the ‘predictive brain’ framework[30]; however, there is at present little experimental work that would provide direct quantitative support for the development of such predictive-processing models.

