Abstract

Background

The computation of pattern motion in visual area MT based on motion input from area V1 has been investigated in many experiments and models attempting to replicate the main mechanisms. Two different core conceptual approaches have been developed to explain the findings. In integrationist models, the key mechanism for achieving pattern selectivity is the nonlinear integration of V1 motion activity. In contrast, selectionist models focus on motion computation at positions with 2D features.

Methodology/Principal Findings

Recent experiments have revealed that neither of the two concepts alone is sufficient to explain all experimental data and that most existing models cannot account for the complex behaviour observed. MT pattern selectivity changes over time for stimuli such as type II plaids, shifting from the vector average to the direction computed by an intersection-of-constraints rule or by feature tracking. The spatial arrangement of the stimulus within the receptive field of an MT cell also plays a crucial role. We propose a recurrent neural model showing how feature integration and selection can be combined in one common architecture to explain these findings. The key features of the model are the computation of 1D and 2D motion in model area V1 subpopulations, which are integrated in model MT cells using feedforward and feedback processing. Our results are also in line with findings concerning the solution of the aperture problem.

Conclusions/Significance

We propose a new neural model for MT pattern computation and motion disambiguation that is based on a combination of feature selection and integration. The model can explain a range of recent neurophysiological findings, including the temporally dynamic behaviour.
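To make the difference between the vector-average and intersection-of-constraints (IOC) readouts concrete, the following minimal numerical sketch computes both for a hypothetical type II plaid. The grating orientations and speeds are illustrative values chosen only for this example; they are not parameters of the model or of the experiments discussed above.

    import numpy as np

    def unit(theta_deg):
        """Unit vector at angle theta (degrees, counter-clockwise from the x-axis)."""
        t = np.deg2rad(theta_deg)
        return np.array([np.cos(t), np.sin(t)])

    def ioc_velocity(normals, speeds):
        """Intersection of constraints: the pattern velocity V satisfies
        V . n_i = s_i for every component grating (n_i: unit normal, s_i: normal speed)."""
        N = np.asarray(normals, dtype=float)
        s = np.asarray(speeds, dtype=float)
        V, *_ = np.linalg.lstsq(N, s, rcond=None)   # exact for two non-parallel gratings
        return V

    def vector_average(normals, speeds):
        """Vector average of the component normal-velocity vectors."""
        N = np.asarray(normals, dtype=float)
        s = np.asarray(speeds, dtype=float)
        return (N * s[:, None]).mean(axis=0)

    # Hypothetical type II plaid: grating normals at 10 and 40 deg, true pattern
    # motion at 70 deg, so both component normals lie on the same side of the
    # pattern direction.
    pattern_v = 2.0 * unit(70.0)
    normals = [unit(10.0), unit(40.0)]
    speeds = [pattern_v @ n for n in normals]       # normal speeds visible to local detectors

    va = vector_average(normals, speeds)
    ioc = ioc_velocity(normals, speeds)
    print("vector average direction: %.1f deg" % np.degrees(np.arctan2(va[1], va[0])))   # ~29 deg
    print("IOC direction:            %.1f deg" % np.degrees(np.arctan2(ioc[1], ioc[0])))  # 70 deg

For a type II plaid the two readouts disagree strongly, which is why a temporal shift of MT pattern selectivity from the vector-average direction toward the IOC or feature-tracking direction is experimentally diagnostic.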


Introduction

Motion is an important feature of the visual input, as it plays a key role for a subject interacting with his or her environment. Motion processing in the visual cortex has been a topic of intense investigation for several decades. It is still an open question how localized measurements of spatiotemporal changes are integrated and disambiguated, in particular for stimuli that provoke non-unique neural responses. The computation of coherent object motion, which may differ from the locally measurable component motion, is apparent for plaid stimuli. Recent investigations by Pack and Born revealed that MT neurons do not suffer from the aperture problem, in contrast to neurons in area V1 [7]. These authors found that area MT neurons can compute the global motion direction for larger stimuli, e.g., for the barberpole stimulus, again in contrast to responses measured in area V1 [8]. Selectionist models, in contrast to integrationist approaches, focus on motion computation at positions with 2D features.
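The aperture problem itself can be stated compactly: a detector viewing a moving 1D contour through a small aperture can only recover the velocity component normal to the contour. The short sketch below illustrates this with made-up numbers (the bar orientation and speed are illustrative, not stimuli from the cited experiments).

    import numpy as np

    # A bar oriented at 45 deg translating rightward at speed 1 (illustrative values).
    true_v = np.array([1.0, 0.0])                     # actual 2D velocity of the contour
    normal = np.array([np.cos(np.deg2rad(135.0)),     # unit normal to a 45-deg contour
                       np.sin(np.deg2rad(135.0))])
    measured = (true_v @ normal) * normal             # only the normal projection is observable
    print(measured)   # ~[0.5, -0.5]: a V1-like local measurement mis-reports the direction

A local, V1-like measurement therefore reports motion perpendicular to the contour, and additional integration or selection is required to recover the true object velocity.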
