Abstract

Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish the retinal image shifts caused by eye movements from shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, a phenomenon called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene against the actual visual feedback from the postsaccadic scene in the computations for visual stability? Using an SSD task, we tested how participants localized the presaccadic position of the fixation target, the saccade target, or a peripheral non-foveated target that was displaced parallel or orthogonal to a horizontal saccade and subsequently viewed for three different durations. Results showed different localization errors for the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data with a Bayesian causal inference mechanism in which, at the trial level, an optimal mixture of two possible strategies, integration versus separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability.
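The trial-level mixing of the two causal structures described above can be illustrated with a standard Bayesian causal inference computation: the observer weighs the probability that the presaccadic memory and the postsaccadic signal arose from one common target position (integrate) against the probability that the target was displaced (segregate), and averages the two resulting position estimates by the posterior over structures. The following sketch is illustrative, not the authors' fitted model; all numeric parameter values (noise levels, prior width, prior probability of a common cause) are hypothetical.

```python
import numpy as np

def gauss(x, mu, var):
    """Gaussian density of x with mean mu and variance var."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def causal_inference(x_pre, x_post, s_pre, s_post,
                     p_common=0.5, mu_p=0.0, s_p=20.0):
    """One-trial causal inference (model averaging), illustrative values.
    x_pre: remembered presaccadic position; x_post: postsaccadic signal;
    s_pre, s_post: their noise SDs; mu_p, s_p: spatial prior."""
    v1, v2, vp = s_pre ** 2, s_post ** 2, s_p ** 2
    # Likelihood of both signals under a common cause (C = 1)
    denom = v1 * v2 + v1 * vp + v2 * vp
    L1 = np.exp(-0.5 * ((x_pre - x_post) ** 2 * vp
                        + (x_pre - mu_p) ** 2 * v2
                        + (x_post - mu_p) ** 2 * v1) / denom) \
         / (2 * np.pi * np.sqrt(denom))
    # Likelihood under independent causes (C = 2): signals are independent
    L2 = gauss(x_pre, mu_p, v1 + vp) * gauss(x_post, mu_p, v2 + vp)
    post_c1 = p_common * L1 / (p_common * L1 + (1 - p_common) * L2)
    # Estimates of the presaccadic position under each structure
    # (reliability-weighted means):
    s_hat_c1 = (x_pre / v1 + x_post / v2 + mu_p / vp) / (1/v1 + 1/v2 + 1/vp)
    s_hat_c2 = (x_pre / v1 + mu_p / vp) / (1/v1 + 1/vp)  # memory alone
    # Model averaging: mix the two estimates by the posterior over structures
    return post_c1, post_c1 * s_hat_c1 + (1 - post_c1) * s_hat_c2
```

With a small displacement the common-cause posterior is high and the localization estimate is pulled toward the postsaccadic signal; with a large displacement the two signals are attributed to separate causes and the estimate falls back on the presaccadic memory.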

Highlights

  • During saccadic eye movements, the image of the world shifts across our retina

  • This study examines how the brain distinguishes the image perturbations caused by saccades and those due to changes in the visual scene

  • We first show that participants made severe errors in judging the presaccadic location of an object that shifts during a saccade

Introduction

With each saccadic eye movement, the image of the world shifts across our retina. Despite these shifts, we perceive targets as having world-stable positions and have no problem acting upon them whenever necessary. Vaziri et al. [2] recently tested the hypothesis that the brain computes the position of a reach target after a saccade based on the optimal integration of predicted and actual sensory feedback. In their paradigm, participants briefly foveated a visual target in complete darkness before making a saccade. The authors further demonstrated that the uncertainty of the postsaccadic target position, which was modulated by varying its viewing time, affected its weight in the integration process.
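The integration scheme just described can be sketched in a few lines: each signal is weighted by its reliability (inverse variance), so lowering the sensory noise of the postsaccadic signal, for instance by viewing it longer, shifts the combined estimate toward it. This is a minimal illustration of reliability-weighted integration, not the study's fitted model; all numeric values are hypothetical.

```python
def integrate(x_pred, x_vis, sigma_pred, sigma_vis):
    """Reliability-weighted (maximum-likelihood) integration of the
    predicted (x_pred) and actually viewed (x_vis) target positions."""
    w_vis = sigma_pred ** 2 / (sigma_pred ** 2 + sigma_vis ** 2)
    return (1 - w_vis) * x_pred + w_vis * x_vis

# Longer viewing of the postsaccadic target lowers its sensory noise
# (illustrative values), pulling the estimate toward the visual signal:
for sigma_vis in (4.0, 2.0, 0.5):
    print(integrate(0.0, 2.0, sigma_pred=2.0, sigma_vis=sigma_vis))
```

As `sigma_vis` shrinks, the printed estimates move monotonically from near the predicted position (0.0) toward the visual signal (2.0), mirroring the viewing-time effect reported by Vaziri et al. [2].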

