Abstract

In recent years, spiking neural networks (SNNs) have received increasing research attention in the field of artificial intelligence due to their high biological plausibility, low energy consumption, and rich spatio-temporal dynamics. However, the non-differentiable spike activity makes SNNs difficult to train in a supervised setting. Most existing methods focus on introducing an approximated derivative to replace the true gradient, but they are often based on static surrogate functions. In this paper, we propose progressive surrogate gradient learning for backpropagation in SNNs, which approximates the step function gradually and reduces information loss. Furthermore, memristor crossbar arrays are used to speed up computation and reduce system energy consumption, owing to their hardware advantages. The proposed algorithm is evaluated on both static and neuromorphic datasets using fully connected and convolutional network architectures, and the experimental results indicate that our approach achieves high performance compared with previous work.
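To illustrate the core idea, the following is a minimal sketch (not the authors' implementation) of a progressive surrogate gradient: the forward pass uses the non-differentiable Heaviside spike function, while the backward pass substitutes a sigmoid-derivative surrogate whose sharpness parameter k is annealed over training so the surrogate gradually approaches the step function. The class name, the sigmoid surrogate, and the linear annealing schedule are all illustrative assumptions.

```python
import torch

class ProgressiveSurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; sigmoid-based surrogate
    gradient in the backward pass. The sharpness k grows during
    training, so the surrogate progressively approximates the step
    function. (Sketch only; names and schedule are assumptions.)"""

    @staticmethod
    def forward(ctx, membrane_potential, k):
        ctx.save_for_backward(membrane_potential)
        ctx.k = k
        # Fire a spike wherever the membrane potential crosses the threshold (0 here).
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        k = ctx.k
        # Derivative of sigmoid(k * u): k * s * (1 - s); sharper as k grows.
        s = torch.sigmoid(k * membrane_potential)
        surrogate_grad = k * s * (1.0 - s)
        # No gradient with respect to the scalar k.
        return grad_output * surrogate_grad, None


# Toy usage: linearly anneal k from a smooth surrogate toward a step-like one.
k_start, k_end, num_epochs = 2.0, 20.0, 100
for epoch in range(num_epochs):
    k = k_start + (k_end - k_start) * epoch / (num_epochs - 1)
    u = torch.randn(8, requires_grad=True)    # toy membrane potentials
    spikes = ProgressiveSurrogateSpike.apply(u, k)
    spikes.sum().backward()                   # surrogate gradients flow back to u
```

Early in training, a small k yields a wide, smooth surrogate that propagates gradients broadly; as k increases, the surrogate narrows toward the true step-function behavior, which is the intuition behind reducing the information loss described above.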
