Abstract

The design of an analog VLSI neural network processor for scientific and engineering applications such as pattern recognition and image compression is described. The backpropagation and self-organization learning schemes used in artificial neural networks require high-precision multiplication and summation. The analog neural network design presented here performs high-speed feedforward computation in parallel; a digital signal processor or a host computer updates the synapse weights during the learning phase. The analog computing blocks consist of a synapse matrix and the input and output neuron arrays. Each output neuron is composed of a current-to-voltage converter and a sigmoid function generator with a controllable voltage gain. An improved Gilbert multiplier is used for the synapse design. The input and output neurons are tailored to reduce the network settling time and to minimize the silicon area required for implementation.
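The feedforward path described above can be sketched in software terms: each output neuron sums the synapse currents (weight times input) and passes the result through a sigmoid whose slope is set by a gain parameter. This is a minimal illustrative model, not the paper's circuit; the function names and the logistic form of the sigmoid are assumptions for illustration.

```python
import math

def sigmoid(x, gain=1.0):
    # Logistic sigmoid with a controllable gain; the gain scales the slope,
    # analogous to the controllable voltage gain of the output neuron.
    return 1.0 / (1.0 + math.exp(-gain * x))

def feedforward(inputs, weights, gain=1.0):
    # Each row of the weight matrix models one output neuron's synapses.
    # The inner sum models the summed synapse currents; the sigmoid models
    # the current-to-voltage converter plus sigmoid function generator.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)), gain)
            for row in weights]

# Hypothetical 3-input, 2-output layer
inputs = [0.5, -0.2, 0.8]
weights = [[0.1, 0.4, -0.3],
           [0.7, -0.5, 0.2]]
outputs = feedforward(inputs, weights, gain=2.0)
```

In the hardware, all of the multiply-accumulate operations above happen simultaneously in the analog synapse matrix; only the weight updates are delegated to a digital processor.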


