Abstract

With the availability of big data and advanced hardware, deep learning has been applied to diverse applications such as self-driving cars and face recognition. Because such systems must handle various sources of uncertainty and balance multiple objectives, hardware implementations of deep learning require continuous model updating and intensive synaptic-weight storage, making SRAM critical to overall performance and energy efficiency. In this brief, we introduce offline data mining into the hardware design process; the discovered data knowledge, combined with a data-driven hardware design technique, enables a more intelligent memory with a better trade-off among energy efficiency, cost, and classification accuracy, thereby relieving the heavy burden of data storage in deep learning systems. A 45 nm 64 kbit (256 words × 256 bits) synaptic SRAM is presented that achieves 45.6% active-power savings and 83.2% leakage-power savings, with low implementation cost (3.17%) and less than 1% degradation in classification accuracy.
