Abstract

The motion response properties of neurons increase in complexity as one moves from primary visual cortex (V1) up to higher cortical areas such as the middle temporal (MT) and medial superior temporal (MST) areas. Many of the features of V1 neurons can now be replicated using computational models based on spatiotemporal filters. However, until recently, relatively little was known about how the motion-analysing properties of MT neurons could originate from the V1 neurons that provide their inputs. This has constrained the development of models of the MT–MST stages, which have been linked to higher-level motion processing tasks such as self-motion perception and depth estimation. I describe the construction of a motion sensor built up in stages from two spatiotemporal filters with properties based on V1 neurons. The resulting composite sensor is shown to have spatiotemporal frequency response profiles and speed- and direction-tuning responses comparable to those of MT neurons. The sensor is designed to work with digital images and can therefore be used as a realistic front-end to models of MT and MST neuron processing; it can be probed with the same two-dimensional motion stimuli used to test the neurons and has the potential to act as a building block for more complex models of motion processing.
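The abstract does not specify the two-filter construction itself. As a rough illustration of the general approach it describes, the sketch below implements an opponent motion-energy arrangement of quadrature spatiotemporal Gabor filters in the style of classic V1-based motion models; the filter form, parameter values, and function names are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def st_gabor(x, t, fx, ft, sx, st, phase):
    """Space-time Gabor kernel: a Gaussian envelope multiplied by a
    space-time oriented sinusoid. Its preferred speed is ft / fx.
    (Illustrative V1-like filter, not the paper's exact formulation.)"""
    envelope = np.exp(-x**2 / (2 * sx**2) - t**2 / (2 * st**2))
    carrier = np.cos(2 * np.pi * (fx * x - ft * t) + phase)
    return envelope * carrier

def opponent_motion_energy(stimulus, x, t, fx=1.0, ft=4.0, sx=0.3, st=0.1):
    """Opponent motion energy from two quadrature pairs of spatiotemporal
    filters: positive output signals rightward motion, negative leftward."""
    X, T = np.meshgrid(x, t, indexing="ij")
    energies = []
    for direction in (+1, -1):           # +1: prefers rightward, -1: leftward
        e = 0.0
        for phase in (0.0, np.pi / 2):   # quadrature pair -> phase invariance
            kernel = st_gabor(X, T, fx, direction * ft, sx, st, phase)
            e += np.sum(kernel * stimulus) ** 2
        energies.append(e)
    return energies[0] - energies[1]     # rightward minus leftward energy

# Probe with drifting sinusoidal gratings at the filters' preferred
# spatiotemporal frequency (1 cycle/deg, 4 Hz -> speed 4 deg/s).
x = np.linspace(-1.0, 1.0, 64)       # space (deg)
t = np.linspace(-0.25, 0.25, 64)     # time (s)
X, T = np.meshgrid(x, t, indexing="ij")
rightward = np.cos(2 * np.pi * (1.0 * X - 4.0 * T))
leftward = np.cos(2 * np.pi * (1.0 * X + 4.0 * T))
print(opponent_motion_energy(rightward, x, t) > 0)  # rightward grating
print(opponent_motion_energy(leftward, x, t) < 0)   # leftward grating
```

Because the sensor operates on sampled space-time arrays, the same code can in principle be probed with the drifting gratings and random-dot stimuli used in physiology experiments, which is the property the abstract highlights.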
