Abstract

Recent advances in quantized neural networks (QNNs) have paved the way for energy-efficient hardware architectures for machine learning tasks. Binary and ternary QNNs are suitable for image classification and recognition on highly resource-constrained hardware. Because of their low weight precision, binary neural networks suffer a significant accuracy loss on dense networks and large datasets. Ternary neural networks (TNNs) mitigate this issue by offering higher weight precision and better resource utilization. TNN implementations using conventional complementary metal-oxide-semiconductor (CMOS) and memristive devices show limited improvement in area and energy efficiency. Among the various non-volatile memories, spintronics-based magnetic random access memory (MRAM) is the most prominent choice for neural networks. This work presents differential spin Hall effect (DSHE) MRAM-based two- and three-input ternary computation units (TCUs) for TNNs. Furthermore, a multilayer perceptron architecture with a synaptic crossbar array built from the proposed TCUs is implemented for Modified National Institute of Standards and Technology (MNIST) data classification. The results show that the DSHE-based TCU is 30% more energy efficient than a spin-transfer torque (STT)-MRAM-based design, and that the DSHE-MRAM-based TNN improves energy and area by 82% and 9%, respectively, compared with an STT-based TNN.
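To make the ternary computation concrete, the sketch below shows the arithmetic a TCU realizes in hardware: weights constrained to {-1, 0, +1} reduce every multiply to an add, a subtract, or a skip. The paper gives no software reference, so this is a minimal illustrative sketch; the threshold-based ternarization rule and the specific threshold value are common-practice assumptions, not taken from the source.

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Map full-precision weights to {-1, 0, +1}.

    Threshold-based ternarization is assumed here for illustration;
    the paper does not specify its quantization rule.
    """
    t = np.zeros_like(w, dtype=np.int8)
    t[w > threshold] = 1
    t[w < -threshold] = -1
    return t

def ternary_mac(x, w_t):
    """Multiply-accumulate with ternary weights.

    Since each weight is -1, 0, or +1, the dot product collapses to
    additions and subtractions -- the property that makes TNNs cheap
    to realize in crossbar hardware such as the TCUs described above.
    """
    return x[w_t == 1].sum() - x[w_t == -1].sum()

# Example: one ternary neuron on a random 16-element input vector.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=16)
x = rng.normal(size=16)
print(ternary_mac(x, ternarize(w)))
```

In a synaptic crossbar array, each column performs this accumulate in the analog domain, with the ternary weight stored in the MRAM cell's resistance state rather than computed digitally as above.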
