Abstract

Many everyday interactions with moving objects benefit from an accurate perception of their movement. Self-motion, however, complicates object motion perception because it generates a global pattern of motion on the observer’s retina and radically influences an object’s retinal motion. There is strong evidence that the brain compensates by suppressing the retinal motion due to self-motion; however, this requires estimates of depth relative to the object—otherwise the appropriate self-motion component to remove cannot be determined. The underlying neural mechanisms are unknown, but neurons in brain areas MT and MST may contribute given their sensitivity to motion parallax and depth through joint direction, speed, and disparity tuning. We developed a neural model to investigate whether cells in areas MT and MST with well-established neurophysiological properties can account for human object motion judgments during self-motion. We tested the model by comparing simulated object motion signals to human object motion judgments in environments with monocular, binocular, and ambiguous depth. Our simulations show how precise depth information, such as that from binocular disparity, may improve estimates of the retinal motion pattern due to self-motion through increased selectivity among units that respond to the global self-motion pattern. The enhanced self-motion estimates emerged from recurrent feedback connections in MST and allowed the model to better suppress the appropriate direction, speed, and disparity signals from the object’s retinal motion, improving the accuracy of the object’s movement direction represented by motion signals.

Highlights

  • It is a challenging problem for a moving observer to correctly perceive the movement of a moving object because the observer’s self-motion influences the retinal motion of the object (Fig 1a)

  • Using a neural modelling approach, we explore whether this finding can be explained through improved estimation of the retinal motion induced by self-motion

  • Our results show that depth cues that provide information about scene structure may have a large effect on the specificity with which the neural mechanisms for motion perception represent the visual self-motion signal

Introduction

It is a challenging problem for a moving observer to correctly perceive the movement of a moving object because the observer’s self-motion influences the retinal motion of the object (Fig 1a). Human psychophysical studies [1, 2] have provided strong evidence that the visual system solves this problem by attempting to remove the retinal component of visual motion that is due to self-motion. This suggests that the visual system transforms the retinal motion signal—the motion of the object in an observer-relative reference frame—into one fixed relative to the stationary world (i.e. world-relative). If the visual system incorrectly assesses depth, it will wrongly infer the proportion of retinal motion that arises due to self-motion. This would result in factoring out too much or too little of the retinal motion due to the observer’s self-motion from the retinal object motion, potentially leading to errors in the perceived movement trajectory of the object.
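The depth dependence of this compensation can be illustrated with the standard pinhole motion-field geometry (not the authors’ MT/MST model itself): the self-motion-induced flow at an image point scales inversely with the depth of the scene point, so subtracting a flow estimate computed from a biased depth estimate leaves a residual that contaminates the recovered object motion. The sketch below uses hypothetical values for observer translation and depths, and omits observer rotation for simplicity.

```python
import numpy as np

def self_motion_flow(x, y, Z, T, f=1.0):
    """Translational component of retinal flow at image point (x, y)
    for a scene point at depth Z, given observer translation T = (Tx, Ty, Tz).
    Standard pinhole motion-field equation; rotation omitted for simplicity."""
    Tx, Ty, Tz = T
    u = (-f * Tx + x * Tz) / Z
    v = (-f * Ty + y * Tz) / Z
    return np.array([u, v])

# Hypothetical scene: observer translating forward and slightly rightward,
# object imaged at (x, y) with true depth Z_true.
T = (0.05, 0.0, 1.0)                  # observer translation (arbitrary units)
x, y, Z_true = 0.2, 0.1, 2.0

world_motion = np.array([0.10, 0.0])  # object's world-relative image motion
retinal = world_motion + self_motion_flow(x, y, Z_true, T)

# Compensation with a correct vs. an underestimated depth estimate
recovered_good = retinal - self_motion_flow(x, y, Z_true, T)
recovered_bad = retinal - self_motion_flow(x, y, 1.5, T)

print(recovered_good)  # matches world_motion: depth correct, flow fully removed
print(recovered_bad)   # residual flow remains: too much self-motion factored out
```

With the correct depth, subtraction recovers the world-relative motion exactly; with the underestimated depth, the self-motion flow is overestimated and over-subtracted, biasing the recovered trajectory—the error pattern the passage above describes.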
