Abstract
We describe and evaluate a model of motion perception based on the integration of information from two parallel pathways: a motion pathway and a luminance pathway. The motion pathway has two stages. The first stage measures and pools local motion across the input animation sequence and assigns reliability indices to these pooled measurements. The second stage groups locations on the basis of these measurements. In the luminance pathway, the input scene is segmented into regions on the basis of similarities in luminance. In a subsequent integration stage, motion and luminance segments are combined to obtain the final estimates of object motion. The neural network architecture we employ is based on LEGION (locally excitatory globally inhibitory oscillator networks), a scheme for feature binding and region labeling based on oscillatory correlation. Many aspects of the model are implemented at the neural network level, whereas others are implemented at a more abstract level. We apply this model to the computation of moving, uniformly illuminated, two-dimensional surfaces that are either opaque or transparent. Model performance replicates a number of distinctive features of human motion perception.
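The oscillatory-correlation scheme underlying LEGION builds on relaxation oscillators of the Terman-Wang type, in which locations belonging to the same segment synchronize while different segments desynchronize. As a hedged illustration only (not the paper's implementation), the sketch below numerically integrates a single such oscillator with the standard parameter names (`eps`, `gamma`, `beta` are assumptions chosen to put the unit in its oscillatory regime):

```python
import numpy as np

def terman_wang(I=0.8, eps=0.02, gamma=6.0, beta=0.1,
                dt=0.01, steps=20000):
    """Euler integration of one Terman-Wang relaxation oscillator.

    x is the fast (excitatory) variable, y the slow recovery variable,
    and I the external input. With sufficient positive input the unit
    cycles between an active phase (x high) and a silent phase (x low);
    LEGION couples many such units via local excitation and a global
    inhibitor to achieve segment-wise synchrony.
    """
    x, y = -2.0, 0.0
    xs = np.empty(steps)
    for t in range(steps):
        dx = 3.0 * x - x**3 + 2.0 - y + I          # cubic fast dynamics
        dy = eps * (gamma * (1.0 + np.tanh(x / beta)) - y)  # slow recovery
        x += dt * dx
        y += dt * dy
        xs[t] = x
    return xs

trace = terman_wang()
# the trajectory alternates between a high (active) and low (silent) branch
print(trace.max() > 1.0 and trace.min() < -1.0)
```

In a full LEGION network each pixel-like location would host one oscillator, with nearest-neighbor excitatory coupling gated by feature similarity (here, the pooled motion and luminance measurements) and a shared global inhibitor enforcing desynchronization between segments.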