Abstract

There is hardly a more important sensory modality for humans and other mammals than vision. The first and best-known part of the visual system is the retina, which is not a mere photoreceptor array or static camera but a sophisticated feature preprocessor with continuous input and several parallel output channels. These interacting channels together represent the visual scene, yet how they build up this "visual language" has never been understood in neuroscience. Our mammalian retina model can generate the elements of this visual language. In the present paper, the design steps of the implementation of the multilayer CNN retinal model are shown. It is rare for an analogic CNN algorithm to involve such a sophisticated series of different complex dynamics; nevertheless, the algorithm is feasible on a recently fabricated complex-cell CNN-UM chip. The mammalian retina model is mapped onto a full-custom mixed-signal chip that embeds digitally programmable analog parallel processing and distributed image memory on a common silicon substrate. The chip was designed and manufactured in a standard 0.5 μm CMOS technology and contains approximately 500,000 transistors. It consists of 1024 processing units arranged in a 32×32 grid. The functional features of the chip are in accordance with the second-order complex-cell CNN-UM architecture: two CNN layers with programmable inter- and intra-layer connections between cells, as well as programmable layer time constants. The uniqueness of this approach lies, among other things, in its reprogrammability, i.e. its openness to any new discovery, even after a possible retinal implementation.
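As a rough illustration of the second-order architecture described above, the following sketch simulates two coupled CNN layers on a 32×32 grid with 3×3 templates and independent layer time constants. It assumes the classic Chua-Yang state equation and forward-Euler integration; all template values, time constants, and the toy input are illustrative placeholders, not the chip's programmed parameters or the authors' retinal model.

```python
import numpy as np
from scipy.signal import convolve2d

# Minimal sketch (not the authors' implementation): forward-Euler
# simulation of a second-order, two-layer CNN in the Chua-Yang style,
# mirroring the chip's 32x32 grid, 3x3 templates, and per-layer time
# constants. All numeric values below are illustrative assumptions.

N = 32  # grid size, matching the chip's 32x32 processing array

def f(x):
    """Standard CNN output nonlinearity: piecewise-linear saturation."""
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def conv(T, y):
    """Apply a 3x3 template T across the grid (zero boundary)."""
    return convolve2d(y, T, mode="same", boundary="fill")

def euler_step(x1, x2, u, dt, tau1, tau2):
    """One Euler step of the coupled two-layer state equations:
       tau_k * dx_k/dt = -x_k + sum_l A_kl * f(x_l) + B_k * u + z_k."""
    y1, y2 = f(x1), f(x2)
    dx1 = (-x1 + conv(A11, y1) + conv(A12, y2) + conv(B1, u) + z1) / tau1
    dx2 = (-x2 + conv(A21, y1) + conv(A22, y2) + z2) / tau2
    return x1 + dt * dx1, x2 + dt * dx2

# Illustrative templates: intra-layer diffusion plus self-feedback, and
# center-only cross-layer coupling (excitatory 1->2, inhibitory 2->1).
A11 = np.array([[0.0, 0.1, 0.0],
                [0.1, 2.0, 0.1],
                [0.0, 0.1, 0.0]])
A22 = A11.copy()
A12 = np.zeros((3, 3)); A12[1, 1] = -0.5
A21 = np.zeros((3, 3)); A21[1, 1] = 0.5
B1  = np.zeros((3, 3)); B1[1, 1] = 1.0   # input feeds layer 1 only
z1, z2 = 0.0, 0.0                        # bias terms

rng = np.random.default_rng(0)
u  = rng.uniform(-1.0, 1.0, (N, N))      # toy input image
x1 = np.zeros((N, N))
x2 = np.zeros((N, N))

for _ in range(200):                     # integrate the dynamics
    x1, x2 = euler_step(x1, x2, u, dt=0.05, tau1=1.0, tau2=5.0)

print(f(x1).shape, f(x2).shape)          # two parallel output channels
```

In this reading, the templates A_kl and B and the time constants tau_k play the role of the chip's digitally programmable parameters: reprogramming them would select which dynamics, and hence which retinal channel, the same hardware emulates.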
