Abstract

Spiking Neural Networks (SNNs), inspired by the biological brain, have attracted extensive attention for their simplified computation and their ability to process spatiotemporal data. With the development of retina-like sensors, there is a growing demand for edge SNN accelerators. The structured dataflow and high spiking sparsity of SNNs offer a large design space for energy-efficient accelerators. However, existing SNN accelerators do not co-optimize these two features. This work proposes an energy-efficient edge accelerator architecture that supports typical spiking neural networks. We propose a Weight Stationary-Local Output Stationary (WS-LOS) dataflow for SNNs and maximize data reuse through a hierarchical memory structure. Three methods, namely address skipping (AS), dynamic workload scheduling (DWS), and a reconfigurable adder tree (RAT), are proposed to exploit spiking sparsity. The accelerator is synthesized in a 65 nm technology and runs at 250 MHz. We demonstrate the accelerator's ability to process spatiotemporal data on the DVS-Gesture dataset, achieving 92.7% accuracy at 24.3 μJ/image. On the MNIST dataset, the design achieves a recognition throughput of 349.6 KFPS at 0.52 μJ/image, representing top performance among edge SNN accelerators.
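To illustrate the idea behind address skipping, the sketch below shows an event-driven accumulation step in which weight rows are fetched only at the addresses of firing inputs, so work scales with the spike count rather than the layer width. This is a minimal software analogue of the hardware mechanism under assumed shapes, not the accelerator's RTL; the function and variable names are hypothetical.

    import numpy as np

    def accumulate_with_address_skipping(spikes, weights, v_mem):
        """Accumulate synaptic weights into membrane potentials,
        visiting only the addresses of active input spikes.

        spikes  : (N_in,)        binary spike vector for one timestep
        weights : (N_in, N_out)  synaptic weight matrix (weight-stationary)
        v_mem   : (N_out,)       membrane potentials, updated in place
        """
        # Address skipping: iterate over firing addresses only,
        # skipping weight fetches for silent inputs.
        for addr in np.flatnonzero(spikes):
            v_mem += weights[addr]
        return v_mem

    # Example: 1024 inputs firing at ~5% rate, 256 outputs.
    rng = np.random.default_rng(0)
    spikes = (rng.random(1024) < 0.05).astype(np.uint8)
    weights = rng.standard_normal((1024, 256)).astype(np.float32)
    v_mem = np.zeros(256, dtype=np.float32)
    accumulate_with_address_skipping(spikes, weights, v_mem)

At a 5% firing rate, this sketch performs roughly 1/20 of the accumulations of a dense pass, which is the kind of saving the hardware techniques target.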
