Abstract

This paper proposes a scalable, high-performance, and cost-effective digital neuromorphic hardware architecture for a single-layer SNN model with fast and accurate on-chip learning capability. To obtain high recognition accuracy while reducing resource cost and processing latency, the architecture combines three brain-inspired elements: the SNN framework, neuromorphic self-organizing map (SOM) learning (i.e., the unsupervised SOM-STDP learning rule), and biological reinforcement learning (i.e., the reward-modulated STDP, or R-STDP, learning rule). The hardware architecture consists mainly of a parallel, scalable 2D array of sub-network tiles, each computing an 8×8 array of spiking neurons. On the hardware, a single-layer SNN is first trained with the SOM-STDP rule and then fine-tuned with the R-STDP rule to improve recognition accuracy. The architecture achieves a throughput of 448 frames/s during learning and 1,818 frames/s during inference on the MNIST image dataset at a 200 MHz clock frequency, consuming 0.95 W. It attains accuracies of 95.24% and 81.4% on the MNIST and ETH-80 datasets, respectively.
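The two learning stages named above can be illustrated with a minimal sketch of pair-based STDP and its reward-modulated variant. This is not the paper's hardware implementation: the amplitudes, time constant, and reward convention below are illustrative assumptions, and the function names are hypothetical.

```python
import numpy as np

# Illustrative constants (assumed, not taken from the paper):
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # STDP time constant in ms

def stdp_dw(delta_t):
    """Weight change for one pre/post spike pair,
    where delta_t = t_post - t_pre in ms."""
    if delta_t >= 0:
        # Pre fires before post: potentiate, decaying with the delay.
        return A_PLUS * np.exp(-delta_t / TAU)
    # Post fires before pre: depress.
    return -A_MINUS * np.exp(delta_t / TAU)

def r_stdp_dw(delta_t, reward, lr=1.0):
    """Reward-modulated STDP: the raw STDP term acts as an
    eligibility signal gated by a scalar reward, e.g. +1 for a
    correct classification and -1 for an incorrect one."""
    return lr * reward * stdp_dw(delta_t)
```

In the fine-tuning stage, a positive reward reinforces the spike-timing correlations that led to a correct output, while a negative reward reverses them, which is how R-STDP can raise accuracy beyond what unsupervised SOM-STDP alone provides.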
