Abstract

Deep learning (DL) has achieved unprecedented success in many real-world applications. However, DL is difficult to implement efficiently in hardware because it requires a complex gradient-based learning algorithm and high memory bandwidth for synaptic weight storage, especially in today's data-intensive environment. Computing-in-memory (CIM) strategies have emerged as an alternative for realizing energy-efficient neuromorphic applications in silicon, reducing the resources and energy required for neural computation. In this work, we exploit a CIM-based spatial-temporal hybrid neural network (STHNN) with a unique learning algorithm. Specifically, we integrate a multilayer perceptron with a recurrent delay-dynamical system, making the network linearly separable while processing information in both the spatial and temporal domains, and reducing memory bandwidth and hardware overhead through the CIM architecture. The prototype, fabricated in a 180 nm CMOS process, is built from fully analog components and yields an average on-chip classification accuracy of up to 86.9% on handprinted alphabet characters with a power consumption of 33 mW. Beyond that, software-based numerical evaluations on the handwritten digit database and the radio-frequency fingerprinting dataset show 1.6× to 9.8× and 1.9× to 4.4× speedups, respectively, without significantly degrading classification accuracy compared with cutting-edge DL approaches.
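To make the spatial-temporal split concrete, the following is a minimal, hypothetical sketch (not the paper's actual circuit or learning algorithm): a single-nonlinearity delay-dynamical reservoir supplies the temporal processing, and a small multilayer perceptron acts as the spatial readout. All function names, dimensions, and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def delay_reservoir(u, n_virtual=50, eta=0.5, gamma=0.05):
    """Temporal stage: a delay-dynamical system with n_virtual 'virtual nodes'.

    Each input sample is masked onto the virtual nodes, and every node state
    feeds back through a nonlinearity, so temporal context accumulates.
    (Illustrative only; the paper's actual dynamics are analog and on-chip.)
    """
    mask = rng.uniform(-1, 1, n_virtual)           # fixed random input mask
    x = np.zeros(n_virtual)                        # delayed node states
    states = []
    for u_t in u:
        x = np.tanh(eta * x + gamma * mask * u_t)  # delayed feedback + masked input
        states.append(x.copy())
    return np.asarray(states)

def mlp_readout(h, W1, b1, W2, b2):
    """Spatial stage: a small MLP mapping reservoir states to class scores."""
    z = np.maximum(0.0, h @ W1 + b1)               # ReLU hidden layer
    return z @ W2 + b2                             # linear output (logits)

# Toy usage on a random 1-D signal (20 time steps, 3 hypothetical classes).
u = rng.standard_normal(20)
H = delay_reservoir(u)                             # (20, 50) temporal features
W1 = rng.standard_normal((50, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 3)) * 0.1
b2 = np.zeros(3)
logits = mlp_readout(H[-1], W1, b1, W2, b2)        # classify from final state
```

Because the temporal expansion is fixed and only the readout is trained, such hybrids can avoid full backpropagation through time, which is one reason delay-based reservoirs pair well with analog CIM hardware.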
