Abstract

The primate retina performs nonlinear image data reduction, striking a compromise between high resolution where needed, a wide field of view, and a small output image size. For autonomous robotics, this compromise is useful for developing vision systems with adequate response times. This paper reviews the two classes of models of retino–cortical data reduction used in hardware implementations. The first class reproduces the retina-to-cortex mapping using conformal mapping functions: pixel intensities are averaged uniformly within nonoverlapping groups called receptive fields (RFs) whose size, as in the retina, increases with distance from the center of the sensor. Implementations using this class of models are reported to run at video rate (30 frames per second). The second class of models reproduces, in addition to the variable-resolution retino–cortical mapping, the overlap of the receptive fields of retinal ganglion cells. Achieving data reduction with this class of models is more computationally expensive because of the RF overlap; however, an implementation running at a minimum of 10 frames per second has recently been proposed. Beyond their biological consistency, models with overlapping fields permit simple selection of a variety of RF computational masks.
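The first class of models described above can be illustrated concretely. The sketch below (an illustrative Python example, not any of the reviewed hardware implementations) averages pixel intensities over nonoverlapping receptive fields laid out on a log-polar grid, so that RF area grows with distance from the sensor center; the parameters `n_rings`, `n_wedges`, and `r_min` are hypothetical choices, not values from the paper.

```python
import numpy as np

def log_polar_reduce(image, n_rings=32, n_wedges=64, r_min=2.0):
    """Average pixel intensities in nonoverlapping log-polar receptive
    fields (RFs) whose size grows with eccentricity, yielding a small
    cortical-style output image from a full-resolution input."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(ys - cy, xs - cx)
    theta = np.arctan2(ys - cy, xs - cx)  # angle in [-pi, pi)

    r_max = min(cy, cx)
    # Ring index is logarithmic in radius, so outer RFs cover more pixels.
    ring = np.floor(n_rings * np.log(np.maximum(r, r_min) / r_min)
                    / np.log(r_max / r_min)).astype(int)
    wedge = (np.floor((theta + np.pi) / (2 * np.pi) * n_wedges)
             .astype(int)) % n_wedges

    # Each input pixel belongs to exactly one RF: uniform, nonoverlapping averaging.
    valid = (r >= r_min) & (r < r_max)
    sums = np.zeros((n_rings, n_wedges))
    counts = np.zeros((n_rings, n_wedges))
    np.add.at(sums, (ring[valid], wedge[valid]), image[valid])
    np.add.at(counts, (ring[valid], wedge[valid]), 1)
    return sums / np.maximum(counts, 1)
```

With these example parameters, a 256×256 input (65,536 pixels) reduces to a 32×64 output (2,048 values), a roughly 32-fold data reduction while preserving fine sampling near the center.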
