Abstract

In hardware-based spiking neural networks (SNNs), converting analog input data into the arrival time of an input pulse is regarded as a strong candidate for the encoding method due to its biological plausibility and power efficiency. In this work, we trained an SNN encoded by time to first spike (TTFS) and performed inference using the measured behavior of a fabricated TFT-type flash synaptic device. The exponentially decaying synaptic current model required during inference was implemented by reading the devices in the subthreshold region with triangular pulses. In a high-level system simulation, the TTFS-SNN (a two-layer MLP with 512 hidden neurons) reached a high accuracy of 97.94%. Compared to a conventional rate-encoded SNN, the TTFS-SNN made decisions 2.9 times faster and consumed roughly 10 times less energy during inference. Additionally, to operate the network under more stable conditions, we propose a method that drives the synaptic device with a rectangular pulse in the saturation region. The distortion caused by this approximation was minimized by shortening the pulse width. As a result, the modified inference system showed an accuracy of 97.36%, and its prediction time and energy consumption were reduced by factors of 3.97 and 83.04, respectively, compared to those of the rate-SNN. Finally, we analyzed the sensitivity of the network performance to unexpected issues that may occur in a hardware system, demonstrating the competitiveness of the proposed synaptic behavior in the saturation region.
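
To make the encoding concrete, below is a minimal Python sketch of TTFS encoding and of the exponentially decaying synaptic current it drives. This is not the authors' implementation: the linear intensity-to-time mapping, the time constant tau, and all numerical values are illustrative assumptions.

    import numpy as np

    def ttfs_encode(x, t_max=100.0):
        # Map a normalized intensity x in [0, 1] to a first-spike time:
        # stronger inputs fire earlier; zero inputs never fire (time = inf).
        x = np.asarray(x, dtype=float)
        t_spike = np.full(x.shape, np.inf)
        t_spike[x > 0] = t_max * (1.0 - x[x > 0])
        return t_spike

    def synaptic_current(t, t_spike, w, tau=20.0):
        # Assumed kernel: i(t) = w * exp(-(t - t_spike) / tau) for t >= t_spike,
        # and zero before the presynaptic spike arrives.
        dt = t - t_spike
        return np.where(dt >= 0.0, w * np.exp(-np.maximum(dt, 0.0) / tau), 0.0)

    # A bright pixel fires early; its current then decays exponentially.
    t_spk = ttfs_encode(np.array([0.9, 0.1]))
    print(t_spk)                                  # [10. 90.]
    print(synaptic_current(50.0, t_spk, w=1.0))   # [0.135... 0.]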

Highlights

  • SNNs have come to be regarded as a successful computing system and have been widely studied due to their compatibility with hardware implementation, which enables parallel processing of massive data and low-power computing [1], [2]

  • The SNN described above must be operated in the subthreshold region because of its exponentially decaying current model, which is very sensitive to unexpected device variations and noise (a sketch of the rectangular-pulse alternative follows this list)

  • An SNN was trained using time to first spike (TTFS)-encoded data, and it reached an accuracy of 97.94% with a two-layer MLP (512 hidden neurons), which is higher than the results of previous work [11] that also used TTFS encoding
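
The abstract states that driving the synapse with a rectangular pulse in the saturation region approximates the exponential current, and that the distortion shrinks as the pulse is shortened. The toy sketch below illustrates that trade-off under an assumed sample-and-hold model of the rectangular read pulses; the time constant and pulse widths are illustrative, not values from the paper.

    import numpy as np

    tau, T = 20.0, 100.0                  # assumed decay constant and time window
    t = np.linspace(0.0, T, 2001)
    exact = np.exp(-t / tau)              # ideal exponential kernel

    for width in (20.0, 5.0, 1.0):
        # Sample-and-hold: each rectangular pulse holds the kernel value
        # sampled at the start of the pulse for the full pulse width.
        held = np.exp(-(np.floor(t / width) * width) / tau)
        err = np.max(np.abs(held - exact))
        print(f"pulse width={width:5.1f}  max |error|={err:.3f}")

Shortening the pulse from 20 to 1 (in the same arbitrary time units) drops the worst-case error from about 0.63 to about 0.05 in this toy model, consistent with the paper's strategy of minimizing distortion via shorter pulses.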

Introduction

SNNs have come to be regarded as a successful computing system and have been widely studied due to their compatibility with hardware implementation, which enables parallel processing of massive data and low-power computing [1], [2]. As one candidate method for training SNNs, transferring weights trained in an ANN to the SNN has been studied [3], [4]. The conventional ReLU activation function can be approximated by combining integrate-and-fire (I&F) neurons with the rate-encoding method, which expresses an analog-valued input as the frequency of input pulses in the SNN [5]. Owing to this approximation, the SNN can achieve the performance of a highly advanced ANN without significant degradation and can greatly improve the learning speed by reusing the weights already trained in the ANN.
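
As a concrete illustration of that correspondence, the toy sketch below shows a discrete-time I&F neuron whose firing rate over a window approximates max(0, input), i.e., ReLU. It is a minimal model of our own, not the authors' code; the threshold, window length, and reset-by-subtraction scheme are assumptions.

    def iaf_rate(drive, steps=100, v_th=1.0):
        # Discrete-time integrate-and-fire neuron driven by a constant
        # net input 'drive' per step (e.g., a weighted sum of inputs).
        v, spikes = 0.0, 0
        for _ in range(steps):
            v += drive                   # integrate the input current
            if v >= v_th:                # fire, then reset by subtraction
                spikes += 1
                v -= v_th
        return spikes / steps            # empirical firing rate

    for x in (-0.5, 0.0, 0.2, 0.5):
        print(f"input={x:+.1f}  I&F rate={iaf_rate(x):.2f}  ReLU={max(0.0, x):.2f}")

Negative or zero drive never reaches the threshold (rate 0), while positive drive yields a rate proportional to the input, which is exactly the ReLU-like behavior the rate-encoded conversion relies on.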
