Abstract

Recently, spiking neural networks have gained attention owing to their energy efficiency. All-to-all spike-time dependent plasticity (STDP) is a popular learning algorithm for spiking neural networks because it suits nondifferentiable, spike-event-based learning and requires fewer computations than back-propagation-based algorithms. However, hardware implementations of all-to-all STDP are limited by the large storage area required for the spike history and the large energy consumption caused by frequent memory access. We propose a time-step scaled STDP (TS-STDP), which reduces the storage area required for the spike history and shrinks the STDP learning circuit area by 60%, and a post-neuron spike-referred STDP (PR-STDP), which reduces the energy consumption by 99.1% by accessing memory efficiently during learning. The accuracy of MNIST (Modified National Institute of Standards and Technology) image classification degraded by less than 2% when both TS-STDP and PR-STDP were applied. Thus, the proposed hardware-friendly STDP algorithms make all-to-all STDP implementable in a more compact area with lower energy consumption and insignificant accuracy degradation.

Highlights

  • Artificial intelligence (AI) algorithms have developed rapidly in the last decade

  • We propose time-step scaled spike-time dependent plasticity (STDP), or TS-STDP, which reduces the area for storing the spike history by quantizing several time steps into one, and post-neuron spike-referred STDP (PR-STDP), which saves energy by using a post-neuron spike as the trigger for the learning process, reducing the number of memory accesses (see the sketch after this list)

  • The time information of the pre- and post-neuron spikes within the learning window, stored in the spike history, is referred to when the other post- or pre-neuron spikes trigger the long-term potentiation (LTP) or long-term depression (LTD) process, respectively
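As a rough illustration of the time-step scaling idea, the sketch below quantizes raw spike times into coarser scaled steps so that each spike-history entry needs fewer storage bits. The window length, scale factor, and STDP constants are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Minimal sketch of time-step scaling in the spirit of TS-STDP.
# All constants here are assumptions for illustration.

T_WINDOW = 32      # STDP learning window, in raw simulation time steps
SCALE = 4          # raw time steps merged into one scaled step

def quantize_spike_time(t_raw):
    """Map a raw spike time within the window to a scaled time step.

    Storing t_raw directly needs log2(32) = 5 bits per history entry;
    storing the scaled step needs log2(32 / 4) = 3 bits, shrinking the
    spike-history memory at the cost of coarser timing resolution.
    """
    return t_raw // SCALE

def stdp_weight_delta(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=4.0):
    """Exponential STDP rule evaluated on scaled (quantized) spike times."""
    dt = quantize_spike_time(t_post) - quantize_spike_time(t_pre)
    if dt >= 0:    # pre fires before (or with) post -> potentiation (LTP)
        return a_plus * np.exp(-dt / tau)
    else:          # post fires before pre -> depression (LTD)
        return -a_minus * np.exp(dt / tau)
```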



INTRODUCTION

Artificial intelligence (AI) algorithms have developed rapidly in the last decade. As the reliability of these algorithms increases, many applications such as the Internet of Things [1,2], smart factories [3,4], and smart mobility have emerged. In the proposed all-to-all STDP, the time information of the pre- and post-neuron spikes within the learning window, stored in the spike history, is referred to when the other post- or pre-neuron spikes trigger the LTP or LTD process, respectively. The past contribution of a synapse whose pre-neuron fires at a higher rate is ignored when the time information is referenced for the synaptic weight update. The proposed all-to-all PR-STDP and TS-STDP algorithms reduce the area of the STDP circuit and the power consumption while maintaining the performance of the original all-to-all STDP algorithm.
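To make the event-triggered flow concrete, here is a minimal Python sketch in the spirit of PR-STDP: synaptic memory is accessed only when a post-neuron spike triggers LTP or a pre-neuron spike triggers LTD, and only the most recent spike time per neuron is stored, so earlier spikes from a fast-firing pre-neuron are ignored. All names, the window length, and the learning rates are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Hedged sketch of event-triggered STDP updates. Memory is touched only
# on spike events rather than on every simulation time step, which is
# the source of the energy saving the text describes.

N_PRE, N_POST = 4, 2
W = np.full((N_PRE, N_POST), 0.5)        # synaptic weights
last_pre = np.full(N_PRE, -np.inf)       # most recent pre-spike time per neuron
last_post = np.full(N_POST, -np.inf)     # most recent post-spike time per neuron
WINDOW, A_PLUS, A_MINUS, TAU = 8, 0.01, 0.012, 4.0  # assumed constants

def on_post_spike(j, t):
    """Post-neuron j fires: one memory pass applies LTP to synapses whose
    pre-neuron spiked within the learning window. Only the latest pre-spike
    time is kept, so older spikes of a high-rate pre-neuron are ignored."""
    dt = t - last_pre                    # spike lag per pre-neuron
    mask = (dt >= 0) & (dt <= WINDOW)
    W[mask, j] += A_PLUS * np.exp(-dt[mask] / TAU)
    last_post[j] = t

def on_pre_spike(i, t):
    """Pre-neuron i fires: apply LTD against recent post-spikes, then
    overwrite its stored spike time (history is accessed only on events)."""
    dt = t - last_post
    mask = (dt >= 0) & (dt <= WINDOW)
    W[i, mask] -= A_MINUS * np.exp(-dt[mask] / TAU)
    last_pre[i] = t
```

Keeping a single spike time per neuron is what lets the trigger-driven pass stay cheap: each event costs one read-modify-write over the affected row or column of the weight memory instead of a scan over the full spike history.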

HARDWARE IMPLEMENTATION OF THE PROPOSED STDP
SIMULATION RESULT
CONCLUSION