Abstract

Layer-by-layer error back-propagation (BP) in deep spiking neural networks (SNNs) involves complex operations and incurs high latency. To overcome these problems, we propose a method to train deep SNNs efficiently and rapidly by extending the well-known single-layer Tempotron learning rule to multiple SNN layers under the Direct Feedback Alignment (DFA) framework, which directly projects output errors onto each hidden layer through a fixed random feedback matrix. We also propose a trace-based optimization of Tempotron learning. With these two techniques, the learning process becomes spatiotemporally local and is well suited to neuromorphic hardware implementations. We applied the proposed hardware-friendly method to training multi-layer and deep SNNs and obtained comparably high recognition accuracies on the MNIST and ETH-80 datasets.
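
To illustrate the mechanism named in the abstract, the following is a minimal sketch of a DFA-style update for a spiking network: output errors are projected onto the hidden layer through a fixed random feedback matrix, and synaptic eligibility is carried by low-pass spike traces rather than recomputed PSP kernels. All sizes, thresholds, and variable names are illustrative assumptions, not the paper's implementation; the Tempotron-style error and voltage here are crude stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative network sizes and constants (assumptions, not from the paper)
n_in, n_hid, n_out = 784, 400, 10
T, dt, tau = 100, 1.0, 20.0            # time steps, step size (ms), trace time constant

W1 = rng.normal(0, 0.1, (n_hid, n_in))   # input -> hidden weights (trained)
W2 = rng.normal(0, 0.1, (n_out, n_hid))  # hidden -> output weights (trained)
B  = rng.normal(0, 0.1, (n_hid, n_out))  # fixed random feedback matrix (never trained)

def low_pass_trace(spikes, tau, dt):
    """Exponential trace of a spike train: x[t] = x[t-1] * exp(-dt/tau) + s[t]."""
    decay = np.exp(-dt / tau)
    trace = np.zeros(spikes.shape[1])
    traces = np.empty(spikes.shape, dtype=float)
    for t in range(spikes.shape[0]):
        trace = trace * decay + spikes[t]
        traces[t] = trace
    return traces

# Toy Poisson input spikes and placeholder hidden-layer spikes for one sample
in_spikes  = (rng.random((T, n_in))  < 0.02).astype(float)
hid_spikes = (rng.random((T, n_hid)) < 0.05).astype(float)

in_trace  = low_pass_trace(in_spikes,  tau, dt)   # presynaptic traces for hidden updates
hid_trace = low_pass_trace(hid_spikes, tau, dt)   # presynaptic traces for output updates

# Tempotron-style supervision at the output: compare the peak of a voltage proxy
# against the target label (spike expected for the correct class only)
target = np.zeros(n_out); target[3] = 1.0
v_out  = hid_trace @ W2.T                          # (T, n_out) membrane-potential proxy
t_max  = v_out.argmax(axis=0)                      # time of maximal potential per output neuron
fired  = v_out[t_max, np.arange(n_out)] > 1.0      # did the neuron cross the (assumed) threshold?
err    = target - fired                            # +1 missed spike, -1 spurious spike, 0 correct

lr = 1e-3
# Output layer: Tempotron-like update using each neuron's presynaptic trace at its own t_max
W2 += lr * err[:, None] * hid_trace[t_max]         # (n_out, n_hid)

# Hidden layer: DFA projects the same output error through the fixed random matrix B,
# and the input trace serves as a spatiotemporally local eligibility signal
hid_err = B @ err                                  # (n_hid,)
W1 += lr * hid_err[:, None] * in_trace[-1]         # (n_hid, n_in), trace at end of window (simplified)
```

The key point of the sketch is structural: each hidden weight update depends only on a locally available presynaptic trace and an error signal delivered through a fixed random projection, so no layer-by-layer backward pass through the network's weights is required.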
