Abstract
In this paper, we propose a spike-time-based unsupervised learning method using spike-timing-dependent plasticity (STDP). A simplified linear STDP learning rule is proposed for energy-efficient weight updates. To reduce unnecessary computations on the input spikes, a stop mechanism is introduced in the forward pass. In addition, a hardware-friendly input quantization scheme is used to reduce the computational complexity of both the encoding phase and the forward pass. We construct a two-layer fully-connected spiking neural network (SNN) based on the proposed method. Compared to general rate-based SNNs trained by STDP, the proposed method reduces the complexity of the network architecture (an extra inhibitory layer is not needed) and the computations required for synaptic weight updates. According to fixed-point simulation with 9-bit synaptic weights, the proposed SNN with 6144 excitatory neurons achieves a recognition accuracy of 96% on the MNIST dataset without any supervision. An SNN processor containing 384 excitatory neurons with on-chip learning capability is designed and implemented in 28 nm CMOS technology based on the proposed low-complexity methods. The SNN processor achieves an accuracy of 93% on the MNIST dataset. The implementation results show that the SNN processor achieves a throughput of 277.78k FPS with an energy consumption of 0.50 μJ/inference in inference mode, and a throughput of 211.77k FPS with an energy consumption of 0.66 μJ/learning in learning mode.
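The abstract only names the mechanisms it builds on. As a rough illustration of the general shape of these ideas (quantized spike-time input encoding, a forward pass that stops early once an output neuron fires, and a linearized STDP weight update on a two-layer fully-connected SNN), the NumPy sketch below may help. All constants, function names, thresholds, and the exact update formula are illustrative assumptions, not the rule proposed in the paper.

```python
# Illustrative sketch only -- not the paper's exact rule or architecture.
# It shows the general shape of: (1) hardware-friendly input quantization to
# spike times, (2) a forward pass with an early-stop mechanism, and
# (3) a simplified (piecewise-linear) STDP weight update.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_EXC = 784, 384          # MNIST pixels -> excitatory neurons (sizes assumed)
W_BITS = 9                      # 9-bit fixed-point synaptic weights, as in the abstract
W_MAX = 2 ** (W_BITS - 1) - 1   # weight ceiling for the fixed-point range
T_STEPS = 16                    # time steps per input image (assumed)

weights = rng.integers(0, W_MAX // 4, size=(N_IN, N_EXC)).astype(np.int32)

def quantize_input(image, levels=4):
    """Quantize pixel intensities to a few discrete levels and map them to
    spike times (stronger pixels spike earlier). Level count is an assumption."""
    q = np.floor(image * levels).clip(0, levels - 1).astype(np.int32)
    # Spike time per input: level 'levels-1' fires first; zero pixels never fire.
    return np.where(q > 0, levels - q, T_STEPS)      # T_STEPS means "never"

def forward_with_stop(spike_time, threshold=4000):
    """Time-stepped forward pass that stops as soon as any excitatory neuron
    crosses threshold, so later input spikes are never processed.
    The threshold value is an assumption."""
    potential = np.zeros(N_EXC, dtype=np.int64)
    for t in range(T_STEPS):
        active = np.where(spike_time == t)[0]        # inputs spiking at step t
        if active.size:
            potential += weights[active].sum(axis=0)
        fired = np.where(potential >= threshold)[0]
        if fired.size:                               # early stop on first output spike
            return int(fired[np.argmax(potential[fired])]), t
    return int(np.argmax(potential)), T_STEPS        # fallback: most-charged neuron

def linear_stdp_update(winner, spike_time, t_out, lr=4):
    """Simplified linear STDP: inputs that spiked at or before the output spike
    are potentiated by an amount that decays linearly with the timing gap;
    all other inputs receive a small flat depression."""
    dt = t_out - spike_time                          # >= 0: pre spiked before/with post
    dw = np.where(dt >= 0, lr * (T_STEPS - dt), -lr)
    weights[:, winner] = np.clip(weights[:, winner] + dw, 0, W_MAX)

# Usage on one random "image" in [0, 1):
img = rng.random(N_IN)
st = quantize_input(img)
winner, t_out = forward_with_stop(st)
linear_stdp_update(winner, st, t_out)
```

In this sketch the exponential STDP kernel is replaced by a linear ramp and only the winning neuron's synapses are updated, which is one common way such rules are simplified for hardware; the paper's actual rule, thresholds, and encoding should be taken from the full text.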