Abstract

A hardware-based spiking neural network (SNN) has attracted much attention from researchers due to its energy efficiency. When implementing a hardware-based SNN, offline training is most commonly used, in which weights trained by a software-based artificial neural network (ANN) are transferred to synaptic devices. However, mapping all the synaptic weights becomes time-consuming as the scale of the neural network increases. In this paper, we propose a method for quantized weight transfer using spike-timing-dependent plasticity (STDP) in a hardware-based SNN. Although STDP is an online learning algorithm for SNNs, we utilize it as a weight transfer method. First, we train an SNN on the Modified National Institute of Standards and Technology (MNIST) dataset and quantize the trained weights. Next, the quantized weights are mapped to the synaptic devices using STDP, whereby all the synaptic weights connected to a neuron are transferred simultaneously, reducing the number of pulse steps. The performance of the proposed method is confirmed: above a certain level of quantization there is little loss of accuracy, while the number of pulse steps required for weight transfer decreases substantially. In addition, the effect of device variation is verified.
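The abstract's first step, quantizing software-trained weights to a small set of discrete levels before transfer, can be illustrated with a minimal sketch. This is not the paper's exact quantization scheme; uniform quantization, the layer shape, and the number of levels are all illustrative assumptions.

```python
import numpy as np

def quantize_weights(w, n_levels):
    """Uniformly quantize weights to n_levels discrete values
    (illustrative; the paper's exact scheme may differ)."""
    levels = np.linspace(w.min(), w.max(), n_levels)
    # Snap each weight to the nearest quantization level.
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)
    return levels[idx]

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, size=(784, 100))   # e.g. MNIST input-to-hidden weights
w_q = quantize_weights(w, n_levels=8)
print(np.unique(w_q).size)                  # at most 8 distinct weight values
```

Fewer levels mean fewer distinct conductance targets per synaptic device, which is what allows the STDP-based transfer to use fewer pulse steps.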

Highlights

  • Artificial neural networks (ANNs) have become a core technology of the modern artificial intelligence (AI) industry and are utilized in various fields such as image recognition, natural language processing, autonomous vehicles, and so on [1,2,3]

  • Given that the operation of an ANN is based on vector-matrix multiplication (VMM), which is inherently parallel, the conventional computing system, in which central processing units (CPUs) and memory are connected in series, is not suitable for ANNs [4,5]

  • One synaptic weight was implemented by a pair of two synaptic devices in the hardware-based spiking neural network (SNN)



Introduction

Artificial neural networks (ANNs) have become a core technology of the modern artificial intelligence (AI) industry and are utilized in various fields such as image recognition, natural language processing, autonomous vehicles, and so on [1,2,3]. Conventional integrate-and-fire (I&F) neuron circuits have been fabricated in a complementary metal-oxide-semiconductor process with a membrane capacitor [9,10]; in recent years, there has been research on I&F neurons based on memristors such as resistive random-access memory (RRAM), phase-change random-access memory (PRAM), and magnetic random-access memory (MRAM), which have advantages in energy and area [11,12]. A synapse adjusts the strength of the connection between neurons, which is called a synaptic weight. The conductance of the synaptic device changes according to its program (PGM) and erase (ERS) states, so that the strength of the signal transmitted to the neuron (the weight) can be adjusted.
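The role of device conductance described above can be sketched numerically. As noted in the highlights, one signed weight is implemented by a pair of synaptic devices; a common convention, assumed here for illustration, is to take the weight as the difference of the two conductances, with the neuron input computed as a VMM over input spikes. Shapes and conductance ranges are arbitrary.

```python
import numpy as np

# Illustrative sketch (not the paper's circuit): a signed synaptic weight is
# realized as the difference of two device conductances, w = G_pos - G_neg,
# each set by PGM/ERS operations.
def effective_weights(G_pos, G_neg):
    return G_pos - G_neg

rng = np.random.default_rng(1)
G_pos = rng.uniform(0.0, 1.0, size=(4, 3))  # conductances (arbitrary units)
G_neg = rng.uniform(0.0, 1.0, size=(4, 3))
w = effective_weights(G_pos, G_neg)

spikes = np.array([1, 0, 1, 1])             # binary input spike vector
membrane_input = spikes @ w                 # VMM: current summed per neuron
print(membrane_input.shape)                 # (3,)
```

The differential pair lets a device technology with only positive conductances represent both excitatory and inhibitory weights.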
