Abstract

We present a design methodology for mapping neurally inspired algorithms for vector quantization into VLSI hardware. We describe the building blocks used: memory cells, current conveyors, and translinear circuits. We use these basic building blocks to design an associative processor for bit-pattern classification: a high-density, memory-based neuromorphic processor. Operating in parallel, the single-chip system determines the closest match, based on Hamming distance, between an input bit pattern and multiple stored bit templates; ties are broken arbitrarily. Energy-efficient processing is achieved through a precision-on-demand architecture. Scalable storage and processing are achieved through a compact six-transistor static RAM cell/ALU circuit. The single-chip system is programmable for template sets of up to 124 bits per template and can store up to 116 templates (a total storage capacity of 14 Kbits). An additional 604 bits of auxiliary storage are used for pipelining and fault-tolerance reconfiguration. A fully functional 6.8 mm by 6.9 mm chip has been fabricated in a standard single-poly, double-metal 2.0 µm n-well CMOS process.
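The chip's core operation is a parallel nearest-match search under the Hamming distance. As a minimal software-level sketch (not the chip's circuit-level implementation), the following Python snippet illustrates the same computation; the template values, bit width, and tie-breaking by lowest index are illustrative assumptions.

def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions in which two bit patterns differ."""
    return bin(a ^ b).count("1")

def closest_template(input_bits: int, templates: list[int]) -> int:
    """Return the index of the stored template nearest to the input
    under the Hamming distance; ties are broken arbitrarily
    (here, by the lowest index)."""
    return min(range(len(templates)),
               key=lambda i: hamming_distance(input_bits, templates[i]))

# Illustrative usage with small, made-up 8-bit templates
# (the chip itself supports up to 116 templates of up to 124 bits each).
templates = [0b10110010, 0b01101100, 0b11110000]
query = 0b10100010
print(closest_template(query, templates))  # -> 0 (differs in only 1 bit)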
