Abstract
In real-life applications of multilayer neural networks, the scale of integration, processing speed, and manufacturability are of key importance. A simple analog-signal synapse model is implemented on a standard 0.35 μm CMOS process requiring no floating-gate capability. A neural matrix of 2176 analog current-mode synapses, arranged in eight layers of 16 neurons with 16 inputs each, is constructed for a fingerprint feature-extraction application. Synapse weights are stored on analog storage capacitors, and the synapse's nonlinearity with respect to weight is investigated. The capability of the synapse to operate in feedforward and learning modes is studied and demonstrated. The effect of the synapse's inherent quadratic nonlinearity on learning convergence and on the optimization of vector direction is analyzed. Transistor-level analog simulations verify the hardware circuit, and system-level MATLAB simulations verify the synapse mathematical model. The conclusion is that the proposed implementation is well suited to large-scale artificial neural networks, especially when on-chip integration with other products on a standard CMOS process is required.
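As a rough illustration of the feedforward behaviour summarized above, the following Python/NumPy sketch propagates a 16-element input vector through eight layers of 16 neurons each, with every synapse's effective gain taken as a sign-preserving quadratic function of its stored weight voltage. The quadratic form, the tanh neuron activation, and all numeric values here are illustrative assumptions standing in for the circuit equations given in the paper, not the paper's own model.

import numpy as np

N_LAYERS, N_NEURONS, N_INPUTS = 8, 16, 16   # array dimensions stated in the abstract

def effective_weight(v_w):
    # Effective synapse gain as a function of the voltage stored on the weight
    # capacitor. The sign-preserving quadratic dependence is an assumed form,
    # standing in for the device-level nonlinearity analysed in the paper.
    return v_w * np.abs(v_w)

def layer_forward(x, v_w):
    # Current-mode feedforward pass of one layer: each neuron sums the currents
    # of its 16 input synapses. v_w has shape (N_NEURONS, N_INPUTS).
    return effective_weight(v_w) @ x

rng = np.random.default_rng(0)
weight_voltages = [rng.uniform(-1.0, 1.0, (N_NEURONS, N_INPUTS))
                   for _ in range(N_LAYERS)]

x = rng.uniform(0.0, 1.0, N_INPUTS)          # example input feature vector
for v_w in weight_voltages:
    x = np.tanh(layer_forward(x, v_w))       # placeholder neuron activation (assumed)

print("output of final layer:", x)

A system-level model of this kind is what the abstract's MATLAB simulations would correspond to; the actual weight-to-current characteristic and neuron circuit are defined in the paper itself.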