Compared to artificial neural networks (ANNs), spiking neural networks (SNNs) present a more biologically plausible model of neural system dynamics. They rely on sparse binary spikes to communicate information and operate in an asynchronous, event-driven manner. Despite the high heterogeneity of the neural system at the neuronal level, most current SNNs employ the widely used leaky integrate-and-fire (LIF) neuron model, which assumes uniform membrane-related parameters throughout the entire network. This approach hampers the expressiveness of spiking neurons and restricts the diversity of neural dynamics. In this paper, we propose replacing the resistor in the LIF model with a discrete memristor to obtain the heterogeneous memristive LIF (MLIF) model. The memristance of the discrete memristor is determined by the voltage and flux at its terminals, leading to dynamic changes in the membrane time parameter of the MLIF model. SNNs composed of MLIF neurons can not only learn synaptic weights but also adaptively change membrane time parameters according to the membrane potential of the neuron, enhancing the learning ability and expressive power of SNNs. Furthermore, since a proper threshold for spiking neurons can improve the information capacity of SNNs, a learnable straight-through estimator (LSTE) is proposed. The LSTE, based on the straight-through estimator (STE) surrogate function, features a learnable threshold that facilitates the backward propagation of gradients through neurons firing spikes. Extensive experiments on several popular static and neuromorphic benchmark datasets demonstrate the effectiveness of the proposed MLIF and LSTE; in particular, on the DVS-CIFAR10 dataset we achieve a top-1 accuracy of 84.40%.
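The two mechanisms described above can be illustrated with a minimal sketch. The functional form of the memristance, the parameter names (`alpha`, `width`, `v_th`), and the Euler discretization below are assumptions for illustration only, not the paper's exact formulation:

```python
import numpy as np

def memristance(v, phi, r0=1.0, alpha=0.5):
    """Assumed memristance law: depends on terminal voltage v and flux phi."""
    return r0 + alpha * np.tanh(v * phi)

def mlif_step(v, phi, i_in, c=1.0, dt=1.0, v_th=1.0):
    """One Euler step of a memristive LIF (MLIF) neuron.

    The membrane time constant tau = R(v, phi) * C changes dynamically
    because the memristance R depends on the membrane potential and flux,
    giving each neuron its own, state-dependent dynamics.
    """
    r = memristance(v, phi)
    tau = r * c
    v = v + dt * (-v + r * i_in) / tau   # leaky integration of input current
    phi = phi + dt * v                   # flux accumulates the voltage over time
    spike = float(v >= v_th)
    v = v * (1.0 - spike)                # hard reset after a spike
    return v, phi, spike

def lste_grad(v, v_th, width=1.0):
    """STE-style surrogate gradient of the spike w.r.t. membrane potential:
    gradients pass straight through inside a window around the (learnable)
    threshold v_th and are zero elsewhere."""
    return float(abs(v - v_th) < width / 2)
```

During training, `v_th` would itself receive gradients (via the same surrogate) and be updated alongside the synaptic weights, which is what makes the threshold learnable.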