Abstract

Error backpropagation is the most common approach for the direct training of spiking neural networks (SNNs). However, the non-differentiability of spiking neurons makes backpropagating the error a challenge. In this paper, we introduce a new temporal learning algorithm, STiDi-BP, in which we ignore backward recursive gradient computation and, to sidestep the non-differentiability of spiking neurons, use a linear approximation of the derivative of firing latency with respect to the membrane potential. We apply gradient descent to each layer independently, based on an estimate of the temporal error in that layer: we compute the desired firing time of each neuron and compare it to the neuron's actual firing time. STiDi-BP employs time-to-first-spike coding, with one spike per neuron, and uses spiking neuron models with a piecewise linear postsynaptic potential, which provides large computational savings. To evaluate the proposed learning rule, we run three experiments: the XOR problem, the face/motorbike categories of the Caltech 101 dataset, and the MNIST dataset. Experimental results show that STiDi-BP outperforms traditional BP in terms of accuracy and/or computational cost.
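The layer-local update summarized above lends itself to a compact illustration. The following NumPy sketch shows one such update in the spirit of STiDi-BP; it is not the authors' reference implementation. The function names (pl_psp, layer_update), the parameters (tau, t_max, lr), and the constant -1 surrogate for the latency-potential derivative are all illustrative assumptions standing in for the paper's exact choices.

```python
import numpy as np

def pl_psp(s, tau=10.0, t_max=20.0):
    """Piecewise linear postsynaptic potential kernel (assumed shape):
    rises linearly to 1 over [0, tau], decays linearly to 0 by t_max."""
    s = np.asarray(s, dtype=float)
    rise = np.clip(s / tau, 0.0, 1.0)
    fall = np.clip((t_max - s) / (t_max - tau), 0.0, 1.0)
    return np.where(s < 0, 0.0, np.minimum(rise, fall))

def layer_update(w, t_in, t_out, t_target, lr=0.01):
    """One layer-local gradient step: pull each neuron's actual
    first-spike time t_out toward the layer's desired time t_target."""
    # Temporal error of this layer only -- no backward recursion.
    err = t_out - t_target                         # shape (n_out,)
    # d(potential)/d(w_ij) at the firing time is the PSP of input j.
    psp = pl_psp(t_out[:, None] - t_in[None, :])   # shape (n_out, n_in)
    # Linear approximation: d(latency)/d(potential) ~= -1, so the
    # gradient of 0.5 * err^2 w.r.t. w is -err * psp; descend on it.
    return w + lr * err[:, None] * psp

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(0.5, 0.1, size=(4, 6))      # 6 inputs -> 4 neurons
    t_in = rng.uniform(0.0, 10.0, size=6)      # input spike times
    t_out = rng.uniform(5.0, 15.0, size=4)     # actual first-spike times
    t_target = np.full(4, 8.0)                 # desired firing times
    w = layer_update(w, t_in, t_out, t_target)
```

Because the error is defined per layer from desired versus actual firing times, no gradient is propagated backward through spike times; once its target times are set, each layer can be updated independently.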
