Abstract

Internet-of-things applications that use machine-learning algorithms have increased the demand for application-specific, energy-efficient hardware that can perform both learning and inference tasks to adapt to endpoint users or environmental changes. This paper presents a multilayer-learning neuromorphic system with an analog-based multiplier-accumulator (MAC), which can learn from training data using a stochastic gradient descent algorithm. As a component of the proposed system, a current-mode MAC processor, fabricated in 28-nm CMOS technology, performs both forward and backward processing in a crossbar structure of 500×500 6-b transposable SRAM arrays. The proposed system is verified on a two-layer neural network using two prototype chips and an FPGA. Without any calibration circuit for the analog-based MAC, the proposed system compensates for the non-idealities of analog operations by learning the training data with the analog-based MAC itself. With a 1-b (+1, 0, -1) batch update of 6-b synaptic weights, the proposed system achieves a recognition rate of 96.6% with a peak energy efficiency of 2.99 TOPS/W (1 OP = one unsigned 8-b × signed 6-b MAC operation) on MNIST classification.
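To illustrate the weight-update scheme mentioned above, the sketch below models a 1-b (+1, 0, -1) batch update applied to 6-b integer synaptic weights. The thresholded sign rule, the signed range [-32, 31], and all function and variable names are assumptions made for illustration, not the authors' exact implementation.

```python
import numpy as np

# Assumed 6-b signed weight range for illustration.
W_BITS = 6
W_MIN, W_MAX = -(2 ** (W_BITS - 1)), 2 ** (W_BITS - 1) - 1  # [-32, 31]

def ternary_batch_update(weights, grad_batch, threshold=0.0):
    """Apply a +1/0/-1 step to integer weights from an accumulated gradient.

    weights    : int array of 6-b synaptic weights
    grad_batch : float array, gradient accumulated over one mini-batch
    threshold  : magnitude below which no update is applied (assumed rule)
    """
    # Each weight moves by at most one LSB per batch, in the direction
    # opposite to the gradient sign (gradient descent).
    step = -np.sign(grad_batch).astype(np.int32)
    step[np.abs(grad_batch) <= threshold] = 0
    # Saturate to the representable 6-b range.
    return np.clip(weights + step, W_MIN, W_MAX)

# Example: a small weight tile updated from a random gradient estimate.
rng = np.random.default_rng(0)
w = rng.integers(W_MIN, W_MAX + 1, size=(4, 3))
g = rng.normal(size=(4, 3))
w_next = ternary_batch_update(w, g, threshold=0.1)
print(w_next)
```

Restricting each update to a single LSB per batch keeps the write circuitry for the SRAM-based synapse array simple, which is consistent with the abstract's emphasis on energy efficiency, though the exact update circuit is not described here.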
