Abstract

For Internet of Things (IoT) edge devices, local sensemaking capability is highly attractive compared with sending all the data back to the cloud for processing. For image pattern recognition, neuro-inspired machine learning algorithms have demonstrated remarkable capability. To implement learning algorithms efficiently on-chip for IoT edge devices, on-chip synaptic memory architectures have been proposed to perform the key operations such as weighted sum or matrix-vector multiplication. In this paper, we propose a low-power design of a static random access memory (SRAM) synaptic array for implementing a low-precision ternary neural network. We experimentally demonstrate that the supply voltage (VDD) of the SRAM array can be aggressively reduced to a level at which the SRAM cells become susceptible to bit failures. Testing results from 65-nm SRAM chips indicate that VDD can be reduced from the nominal 1 V to 0.55 V (or 0.5 V) with a bit error rate of ~0.23% (or ~1.56%), which introduces only ~0.08% (or ~1.68%) degradation in classification accuracy. As a result, the power consumption can be reduced by more than 8× (or 10×).
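The sketch below is a minimal, illustrative model (not the authors' implementation) of the weighted-sum operation described above: a ternary weight matrix with entries in {-1, 0, +1} multiplied by an input vector, where each stored weight may be corrupted with a given bit error rate to mimic SRAM read failures at scaled VDD. The error model (resampling a corrupted weight uniformly from the ternary set) and the array dimensions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def ternary_weighted_sum(W, x, ber=0.0):
    """Compute W @ x after corrupting each weight with probability `ber`.

    Assumption for illustration: a corrupted weight is resampled uniformly
    from {-1, 0, +1}; the paper's cell-level error behavior may differ.
    """
    W_noisy = W.copy()
    flips = rng.random(W.shape) < ber          # which weights are hit by a bit error
    W_noisy[flips] = rng.choice([-1, 0, 1], size=flips.sum())
    return W_noisy @ x                          # weighted sum (matrix-vector product)

# Toy usage: compare error-free and BER-corrupted weighted sums.
W = rng.choice([-1, 0, 1], size=(10, 784))      # hypothetical ternary weight array
x = rng.random(784)                             # hypothetical input activations
y_clean = ternary_weighted_sum(W, x, ber=0.0)
y_noisy = ternary_weighted_sum(W, x, ber=0.0156)  # ~1.56% BER, as reported at VDD = 0.5 V
print(np.abs(y_clean - y_noisy).mean())
```

Comparing the two outputs over a test set is one way to gauge how a given bit error rate translates into classification accuracy degradation, in the spirit of the chip measurements summarized above.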
