Abstract

Spiking neural networks (SNNs) have attracted widespread attention due to their unique biological interpretability and low-power properties, but the non-differentiability of the discrete spike sequences fired by spiking neurons makes training SNNs difficult. Recently, the surrogate gradient (SG) and back-propagation through time (BPTT) have provided an effective approach to training SNNs built from leaky integrate-and-fire (LIF) neuron models. The LIF neuron model has been widely used in previous SNNs because of its simplicity and low computational cost; however, this simplicity also limits how well it reproduces biological neuron dynamics, reducing the biological interpretability of SNNs. In this paper, we generalize SG and BPTT to SNNs built from the spike response model (SRM) and propose the BP-SRM algorithm. Specifically, we analyze why BPTT has succeeded for LIF-based SNNs but failed for SRM-based SNNs in previous research. We then establish an iterative form of the SRM neuron model by selecting appropriate state variables. Based on the iterative SRM, we obtain the spatiotemporal dependencies among the state variables of SRM-based SNNs, which allows us to derive the gradients and update the weights of the SNN with BP-SRM. We further design temporal channel normalization for BP-SRM and evaluate the resulting SNNs on static image, dynamic (event-based) image, and engineering datasets, including Fashion-MNIST, the Weather Dataset, N-MNIST, an American Sign Language dataset, and a bearing fault diagnosis dataset. The experimental results indicate that BP-SRM achieves state-of-the-art performance among SNNs.
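
For context, the SRM defines the membrane potential through kernel convolutions over past spikes; with exponential kernels this admits an equivalent per-time-step recursion. The sketch below is one common rewriting of this kind, with illustrative state variables m, n (postsynaptic potential traces) and r (a refractory/reset trace); it is not necessarily the exact set of state variables selected in the paper. The kernel-based form is

u_i(t) = \sum_{t_i^f} \eta(t - t_i^f) + \sum_j w_{ij} \sum_{t_j^f} \epsilon(t - t_j^f), \qquad s_i(t) = H(u_i(t) - \vartheta),

and with \epsilon(s) = (e^{-s/\tau_m} - e^{-s/\tau_s}) H(s), a time step \Delta t, and decay factors \alpha = e^{-\Delta t/\tau_m}, \beta = e^{-\Delta t/\tau_s}, it can be unrolled as

m_i[t] = \alpha\, m_i[t-1] + \sum_j w_{ij} s_j[t],
n_i[t] = \beta\, n_i[t-1] + \sum_j w_{ij} s_j[t],
r_i[t] = \alpha\, r_i[t-1] + \vartheta\, s_i[t-1],
u_i[t] = m_i[t] - n_i[t] - r_i[t], \qquad s_i[t] = H(u_i[t] - \vartheta).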

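To illustrate how SG and BPTT attach to such a recursion, the following is a minimal, self-contained PyTorch sketch assuming the illustrative state variables above and a triangular surrogate derivative; the constants, the surrogate shape, and the names SpikeFn and srm_layer are assumptions made for illustration, not the paper's exact BP-SRM implementation.

import torch

class SpikeFn(torch.autograd.Function):
    # Heaviside spike in the forward pass, surrogate derivative in the backward pass.
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Triangular surrogate: d(spike)/dv ~= max(0, 1 - |v|).
        return grad_out * torch.clamp(1.0 - v.abs(), min=0.0)

spike = SpikeFn.apply

def srm_layer(x, w, alpha=0.9, beta=0.8, theta=1.0):
    # x: (T, batch, n_in) input spike trains, w: (n_in, n_out) synaptic weights.
    T, batch, _ = x.shape
    n_out = w.shape[1]
    m = torch.zeros(batch, n_out)   # slow postsynaptic trace
    n = torch.zeros(batch, n_out)   # fast postsynaptic trace
    r = torch.zeros(batch, n_out)   # refractory (reset) trace
    s = torch.zeros(batch, n_out)   # spikes at the previous step
    out = []
    for t in range(T):
        i_t = x[t] @ w              # synaptic input at step t
        m = alpha * m + i_t
        n = beta * n + i_t
        r = alpha * r + theta * s
        u = m - n - r               # membrane potential
        s = spike(u - theta)
        out.append(s)
    # Stacking keeps the unrolled graph, so loss.backward() performs BPTT
    # through the temporal recursion, with SpikeFn supplying the surrogate gradient.
    return torch.stack(out)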