Abstract

Due to its inherent neural delays, the visual system has only outdated access to sensory information about the current position of moving objects. In contrast, living organisms are remarkably able to track and intercept moving objects under a large range of challenging environmental conditions. Physiological, behavioral and psychophysical evidence strongly suggests that position coding is extrapolated using an explicit and reliable representation of the object’s motion, but it is still unclear how these two representations interact. For instance, the so-called flash-lag effect supports the idea of a differential processing of position between moving and static objects. Although elucidating such mechanisms is crucial to our understanding of the dynamics of visual processing, a theory is still missing to explain the different facets of this visual illusion. Here, we reconsider several key aspects of the flash-lag effect in order to explore the role of motion in the neural coding of objects’ position. First, we formalize the problem using a Bayesian modeling framework which includes a graded representation of the degree of belief about visual motion. We introduce a motion-based prediction model as a candidate explanation for the perception of coherent motion. By including the knowledge of a fixed delay, we can model the dynamics of sensory information integration by extrapolating the information acquired at previous instants in time. Next, we simulate the optimal estimation of object position with and without delay compensation and compare it with human perception under a broad range of psychophysical conditions. Our computational study suggests that the explicit, probabilistic representation of velocity information is crucial in explaining position coding, and therefore the flash-lag effect. We discuss these theoretical results in light of the putative corrective mechanisms that can be used to cancel out the detrimental effects of neural delays, and we illuminate the more general question of how spatial information is dynamically represented at the present time in the visual pathways.
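To make the delay-compensation idea described above concrete, here is a minimal sketch in Python of extrapolating a delayed position estimate along an estimated velocity. It assumes a constant-velocity model and a known fixed delay; the function name and the numerical values are hypothetical illustrations, not the authors’ actual implementation.

```python
import numpy as np

def extrapolate_position(x_delayed, v_estimate, delay):
    """Compensate a fixed sensory delay by extrapolating the delayed
    position estimate along the estimated velocity.
    Constant-velocity assumption; hypothetical helper for illustration."""
    return x_delayed + v_estimate * delay

# Example: a target moving at 10 deg/s, sensed with a 100 ms delay.
delay = 0.100       # seconds (assumed fixed and known to the system)
v_estimate = 10.0   # deg/s, estimated from motion signals
x_delayed = 2.0     # deg, position as currently sensed (outdated)

x_now = extrapolate_position(x_delayed, v_estimate, delay)
print(x_now)        # 3.0 deg: predicted position at the present time
```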

Highlights

  • Neural delays and motion-induced position shifts: though barely noticeable in everyday life, visual signals captured on the retina take a significant amount of time before they can elicit even the simplest actions, such as eye movements.

  • We tested our model on the different instances of the flash-lag effect (FLE) and manipulated the parameters of the static and moving stimuli in order to explore the advantages of motion-based position coding with respect to previous models.

  • The diagonal motion-based prediction (dMBP) model was implemented with a particle filter method, previously detailed in [16] (an illustrative sketch follows this list).
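Since the highlight above only names the method, the following is a minimal, self-contained particle-filter sketch of motion-based tracking in Python. It assumes constant-velocity dynamics with velocity diffusion and a Gaussian position likelihood; all parameter values and variable names are illustrative assumptions, not the dMBP implementation of [16].

```python
import numpy as np

rng = np.random.default_rng(0)

N, dt = 1000, 0.01              # number of particles, time step (s)
sigma_v, sigma_obs = 0.5, 0.2   # velocity diffusion, sensor noise (assumed)

# Each particle carries a joint position/velocity hypothesis.
x = rng.normal(0.0, 1.0, N)     # position hypotheses
v = rng.normal(0.0, 2.0, N)     # velocity hypotheses
w = np.ones(N) / N              # importance weights

def step(x, v, w, z):
    """One predict/update/resample cycle of the particle filter."""
    # Predict: move each particle along its own velocity hypothesis,
    # letting velocity diffuse (the motion-based prediction step).
    v = v + rng.normal(0.0, sigma_v * np.sqrt(dt), N)
    x = x + v * dt
    # Update: reweight by the Gaussian likelihood of the observation z.
    w = w * np.exp(-0.5 * ((z - x) / sigma_obs) ** 2)
    w /= w.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(w ** 2) < N / 2:
        idx = rng.choice(N, N, p=w)
        x, v, w = x[idx], v[idx], np.ones(N) / N
    return x, v, w

# Track a target moving at 1 unit/s from noisy position observations.
for t in range(200):
    z = 1.0 * t * dt + rng.normal(0.0, sigma_obs)
    x, v, w = step(x, v, w, z)

print(np.sum(w * x), np.sum(w * v))  # posterior means: position, velocity
```

Because every particle carries an explicit velocity hypothesis, the predict step shifts the whole posterior along the believed motion direction; this is what gives the model its graded, probabilistic representation of velocity.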



Introduction

Though it is barely noticeable in everyday life, visual signals captured on the retina take a significant amount of time before they can elicit even the simplest actions, such as eye movements. Studying neural delays and how they may be compensated is a critical question that needs to be resolved in order to decipher how basic neural computations, such as the dynamical processing of sensory information, can be efficiently performed (for a review, see [8]). Solving this enigma would have several theoretical consequences, in particular understanding how neural activity can encode both space and time [9].
