Abstract

A simple linear neural network modelled on areas MT and MST of primate visual cortex can determine an observer's direction of self-motion from the optical flow field induced by observer translation relative to a rigid planar environment. The model's input layer consists of a set of motion detectors covering a 20° × 20° portion of the visual field, with a subset of eight detectors, selective for four primary directions and two speeds, representing the optical motion within a single receptive field. Heading is represented distributively on the output layer in terms of azimuth and elevation. The network's heading accuracy under ideal conditions is on the order of 1° of visual angle, in agreement with perceptual studies of heading accuracy in human observers. The network's performance under noisy optical flow conditions matches that of human subjects remarkably well. Moreover, the network's tolerance of noise makes it potentially useful in robotic vision. A subsequent problem is to extend the model to combined observer translation and rotation.
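
For concreteness, the sketch below illustrates this kind of linear, distributed heading read-out in NumPy. The abstract fixes only the 20° × 20° coverage and the eight motion-detector channels (four directions × two speeds) per receptive field; the receptive-field grid density, the output-layer resolution, the rectified centroid read-out, and the random weights are illustrative assumptions, not the published model or its trained weights.

```python
import numpy as np

# Assumed sizes: the abstract does not specify the receptive-field grid
# or the output-layer resolution, so these are illustrative placeholders.
RF_GRID = 10                  # assumed 10 x 10 receptive fields tiling the 20 deg x 20 deg field
CHANNELS = 8                  # 4 preferred directions x 2 preferred speeds per receptive field
N_INPUT = RF_GRID * RF_GRID * CHANNELS
AZ_UNITS, EL_UNITS = 17, 17   # assumed output grid over azimuth x elevation

azimuths = np.linspace(-8.0, 8.0, AZ_UNITS)    # deg, assumed output tiling
elevations = np.linspace(-8.0, 8.0, EL_UNITS)

# A single linear layer: in the paper the weights are learned; here they
# are random placeholders so the sketch runs end to end.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(AZ_UNITS * EL_UNITS, N_INPUT))

def heading_estimate(flow_responses: np.ndarray) -> tuple[float, float]:
    """Map motion-detector activity to (azimuth, elevation) in degrees.

    `flow_responses` holds the 8 detector activations for every receptive
    field, flattened to a length-N_INPUT vector. Heading is read out
    distributively as the activity-weighted centroid of the output layer
    (an assumed read-out rule; the abstract only says the code is
    distributed over azimuth and elevation).
    """
    activity = W @ flow_responses                  # linear output layer
    activity = np.clip(activity, 0.0, None)        # assumed rectification before read-out
    grid = activity.reshape(EL_UNITS, AZ_UNITS)
    weights = grid / (grid.sum() + 1e-12)
    azimuth = float((weights.sum(axis=0) * azimuths).sum())
    elevation = float((weights.sum(axis=1) * elevations).sum())
    return azimuth, elevation

# Usage: a random stand-in for optical-flow-driven detector activity.
az, el = heading_estimate(rng.random(N_INPUT))
print(f"estimated heading: azimuth {az:.2f} deg, elevation {el:.2f} deg")
```

Because both the layer and the centroid read-out are linear apart from the rectification, noise in the detector responses tends to average out across the population, which is consistent with the noise tolerance the abstract reports.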
