Abstract

Spiking neural networks (SNNs) have attracted extensive attention in large-scale image processing tasks. To obtain higher computing efficiency, the development of hardware architectures suited to SNN computing has become an active research topic. However, existing spiking-neuron hardware still has high computational complexity and does not perform well enough on complicated datasets, and existing neuromorphic systems cannot support SNNs with different convolutional topologies, resulting in low system efficiency. To address these problems, an optimized leaky integrate-and-fire (LIF) neuron called EPC-LIF and a neuromorphic hardware acceleration system (ELIF-NHAS) are designed and implemented on a field-programmable gate array (Xilinx Kintex-7). First, the classical LIF neuron is redesigned using an extended prediction correction (EPC) optimization method, which reduces computational complexity and hardware resource usage while achieving a maximum frequency of 439.95 MHz. The ELIF-NHAS is then constructed and optimized with parallel and pipeline techniques to run SNNs efficiently, operating at a maximum frequency of 135.6 MHz. A genetic algorithm is applied to tune the membrane thresholds of the neurons, further improving SNN accuracy. Furthermore, the ELIF-NHAS supports SNNs with both multilayer perceptron and convolutional neural network topologies (called SCNNs), including traditional, depthwise-separable, and residual convolutions. Multilayer SCNNs achieve accuracies of 99.10%, 90.29%, and 82.15% on the MNIST, Fashion-MNIST, and SVHN datasets, respectively, with an inference speed of 1.21 ms/image and energy consumption of 1.19 mJ/image. Compared with existing systems, the ELIF-NHAS is better suited to the deployment and inference of SNNs, offering higher speed and lower energy consumption.
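For readers unfamiliar with the baseline that the EPC optimization targets, the sketch below shows a minimal discrete-time update of a classical LIF neuron: leaky integration of weighted input spikes followed by a threshold-and-reset step. It is purely illustrative; the parameter names and values (tau_m, dt, v_th, v_reset, w) are assumptions and are not taken from the paper, and the sketch does not reflect the EPC-LIF formulation or its fixed-point hardware implementation.

```python
import numpy as np

def lif_step(v, spike_in, *, tau_m=20.0, dt=1.0, v_th=1.0, v_reset=0.0, w=0.5):
    """One discrete-time update of a classical leaky integrate-and-fire neuron.

    v        : current membrane potential
    spike_in : binary input spike (0 or 1)
    Returns (new membrane potential, output spike).
    All parameter names and values are illustrative assumptions, not from the paper.
    """
    # Leaky integration: the membrane potential decays exponentially
    # and accumulates the weighted input spike.
    leak = np.exp(-dt / tau_m)      # per-step decay factor
    v = leak * v + w * spike_in

    # Fire-and-reset: emit a spike once the threshold is crossed.
    spike_out = 1 if v >= v_th else 0
    if spike_out:
        v = v_reset
    return v, spike_out


# Example: drive the neuron with a short spike train.
v = 0.0
for s in [1, 1, 0, 1, 1, 1, 0]:
    v, out = lif_step(v, s)
    print(f"v={v:.3f}, spike_out={out}")
```

In this formulation, v_th is the membrane threshold; the genetic-algorithm tuning described in the abstract would adjust such thresholds to improve SNN accuracy, though the exact encoding and search procedure are given in the full paper rather than here.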
