Abstract

Analog very large scale integration (VLSI) implementations of neural networks can compute using a fraction of the size and power required by their digital counterparts. However, intrinsic limitations of analog hardware, such as device mismatch, charge leakage, and noise, reduce the accuracy of analog arithmetic circuits, degrading the performance of large-scale adaptive systems. In this paper, we present a detailed mathematical analysis that relates different parameters of the hardware limitations to specific effects on the convergence properties of linear perceptrons trained with the least-mean-square (LMS) algorithm. Using this analysis, we derive design guidelines and introduce simple on-chip calibration techniques to improve the accuracy of analog neural networks with a small cost in die area and power dissipation. We validate our analysis by evaluating the performance of a mixed-signal complementary metal-oxide-semiconductor (CMOS) implementation of a 32-input perceptron trained with LMS.
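For reference, the LMS rule the abstract refers to is the standard Widrow-Hoff update: each weight moves in proportion to the input and the instantaneous output error. The sketch below shows the ideal (noise- and mismatch-free) algorithm in NumPy; the function name, step size, and training data are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def lms_train(x_samples, d_samples, mu=0.01, epochs=10):
    """Train a linear perceptron with the LMS (Widrow-Hoff) rule:
    w <- w + mu * e * x, where e = d - w.x is the output error.
    Step size mu and epoch count are illustrative choices."""
    w = np.zeros(x_samples.shape[1])
    for _ in range(epochs):
        for x, d in zip(x_samples, d_samples):
            e = d - w @ x      # instantaneous output error
            w += mu * e * x    # gradient-descent step on e**2
    return w

# Illustrative use: recover the weights of a noisy 32-input linear target.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 32))
w_true = rng.standard_normal(32)
d = X @ w_true + 0.01 * rng.standard_normal(500)
w_est = lms_train(X, d)
```

In the analog setting the paper analyzes, mismatch, leakage, and noise perturb both the error computation and the weight update itself; relating those perturbations to the convergence behavior of this update is what the paper's analysis quantifies.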
