Abstract

The spiking neural network (SNN), as the third generation of artificial neural networks, offers higher computational efficiency, lower resource overhead, and greater biological plausibility, and shows strong potential for applications in audio and image processing. In the traditional approach, adders are used to accumulate the membrane potential, which suffers from low efficiency, high resource overhead, and a low level of integration. In this work, we propose an SNN inference accelerator with higher integration and computational efficiency. Resistive random access memory (RRAM, or the memristor) is an emerging storage technology whose resistance varies with the applied voltage; it can be used to build crossbar architectures that perform matrix computation and has been widely applied in processing in memory (PIM), neural network computing, and other fields. We design a weight storage matrix and peripheral circuits that emulate the leaky integrate-and-fire (LIF) neuron on a memristor array, and we propose an SNN hardware inference accelerator that integrates 24k neurons and 192M synapses with 0.75k memristors. We deploy a three-layer fully connected network on the accelerator and use it to run inference on the MNIST dataset. The results show that the accelerator achieves 148.2 frames/s and 96.4% accuracy at a clock frequency of 50 MHz.
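For reference, the discrete-time LIF behavior that the memristor array and peripheral circuit emulate can be sketched in a few lines of Python. This is only an illustrative software model: the decay factor, threshold, and reset value below are hypothetical placeholders, not parameters reported in the paper, and the matrix-vector product stands in for the analog computation performed by the crossbar.

    import numpy as np

    def lif_step(v, spikes_in, weights, decay=0.9, v_th=1.0, v_reset=0.0):
        """One discrete-time LIF update for a layer of neurons.

        v         : membrane potentials, shape (n_out,)
        spikes_in : binary input spikes, shape (n_in,)
        weights   : synaptic weight matrix, shape (n_out, n_in);
                    the crossbar evaluates this product in the analog domain
        """
        # Leak the membrane potential, then integrate the weighted input spikes.
        v = decay * v + weights @ spikes_in
        # Neurons whose potential crosses the threshold fire and are reset.
        spikes_out = (v >= v_th).astype(np.float32)
        v = np.where(spikes_out > 0, v_reset, v)
        return v, spikes_out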
