Abstract

A pixel for measuring two-dimensional (2-D) visual motion with two one-dimensional (1-D) detectors has been implemented in very large scale integration (VLSI). Based on the spatiotemporal feature-extraction model of Adelson and Bergen, the pixel is realized using a general-purpose analog neural computer and a silicon retina. Because the neural computer offers only sum-and-threshold neurons, the Adelson-Bergen model is modified: the quadratic nonlinearity is replaced with full-wave rectification, and the contrast normalization is replaced with edge detection and thresholding. Motion is extracted in two dimensions by two 1-D detectors with spatial smoothing orthogonal to the direction of motion. Analysis shows that our pixel, despite some limitations, has much lower hardware complexity than the full 2-D model. It also produces more accurate results and suffers less from the aperture problem than two 1-D detectors without smoothing. Real-time velocity is represented as a distribution of activity over 18 X and 18 Y velocity-tuned neural filters.
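The central modification described above, replacing the quadratic nonlinearity of the Adelson-Bergen energy model with full-wave rectification, can be sketched in software. The following is a minimal illustrative example, not the paper's circuits: it exploits the identity (a+b)² − (a−b)² = 4ab behind the energy model, but substitutes |a+b| − |a−b| for the squared terms, so the opponent stage needs only sums and rectification. The stimulus, filter shapes, and function names are assumptions for illustration.

```python
# Sketch of a 1-D motion-opponent stage with full-wave rectification
# standing in for the quadratic nonlinearity (illustrative, hypothetical code).

def rectified_opponent_energy(prev, cur, direction):
    """Opponent response tuned to `direction` pixels/frame.
    Uses |a+b| - |a-b| (rectified sum/difference channels) in place of
    the quadratic (a+b)^2 - (a-b)^2 = 4ab of the energy model."""
    n = len(cur)
    total = 0.0
    for x in range(n):
        a = cur[x]
        b = prev[(x - direction) % n]  # previous frame, displaced by the tuning
        total += abs(a + b) - abs(a - b)
    return total

def bar(pos, n=16):
    """A bright one-pixel bar at `pos` on an n-pixel line (toy stimulus)."""
    return [1.0 if i == pos % n else 0.0 for i in range(n)]

# A bar stepping rightward one pixel per frame.
frames = [bar(t) for t in range(8)]

right = sum(rectified_opponent_energy(p, c, +1) for p, c in zip(frames, frames[1:]))
left = sum(rectified_opponent_energy(p, c, -1) for p, c in zip(frames, frames[1:]))
# The rightward-tuned channel dominates for rightward motion.
```

In the paper's pixel, a bank of such direction- and speed-tuned channels (the 18 X and 18 Y velocity-tuned filters) would each report activity, and velocity is read out as the distribution over the bank rather than from a single channel.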
