Abstract

Convolutional Neural Networks (CNNs) are one of the most important classes of Deep Neural Networks (DNNs), solving many tasks in image recognition and computer vision. Their classical implementations, built on conventional CMOS technologies and digital design techniques, are still very energy-consuming. Floating-point CNNs rely primarily on the MAC (Multiply-and-ACcumulate) operation. Recently, cost-effective bit-wise CNNs based on XNOR and bit-counting operations have been considered as possible hardware implementation candidates. However, the von Neumann bottleneck, caused by intensive data fetching between memory and the computing core, limits their scalability in hardware. XNOR-BITCOUNT operations can be easily implemented using In-Memory Computing (IMC) paradigms executed on a memristive crossbar array. Among emerging memristive devices, Spin-Orbit Torque Magnetic Random Access Memory (SOT-MRAM) offers a higher ON resistance, which reduces the read current, since the entire crossbar array is read in parallel. This can further reduce energy consumption, paving the way for much larger crossbar designs. This study presents a crossbar architecture based on SOT-MRAM with very low energy consumption; we study the impact of process variability on the synaptic weights and perform Monte Carlo simulations of the overall crossbar array to evaluate the error rate. Simulation results show that this implementation consumes less energy than other memristive solutions, at 65.89 fJ per read operation. The design is also robust to process variations, maintaining very low read inaccuracies for variations of up to 10%.
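For context, the following is a minimal Python sketch (illustrative, not from the paper) of the XNOR-BITCOUNT equivalence the abstract refers to: for binarized values in {-1, +1} encoded as bits {0, 1}, a MAC-based dot product reduces to an XNOR followed by a bit count. The function name `binary_dot` and the bit-packed encoding are our assumptions for illustration.

```python
def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed LSB-first into ints.

    XNOR marks the positions where the two operands agree (product = +1);
    the bit count (popcount) tallies them, and the affine map 2*matches - n
    recovers the signed dot product that a MAC unit would compute.
    """
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # XNOR, masked to the n valid bits
    matches = bin(xnor).count("1")              # bit-counting (popcount)
    return 2 * matches - n                      # signed dot product

# Example: a = [+1, -1, +1, +1], w = [+1, +1, -1, +1],
# encoded LSB-first as 0b1101 and 0b1011.
a, w = 0b1101, 0b1011
assert binary_dot(a, w, 4) == sum(x * y for x, y in
                                  zip([1, -1, 1, 1], [1, 1, -1, 1]))
```

In an IMC realization, this per-column match counting is what the crossbar performs in the analog domain: the summed read currents of a column play the role of the bit count.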
