Abstract
We propose a Double EXponential Adaptive Threshold (DEXAT) neuron model that improves the performance of neuromorphic Recurrent Spiking Neural Networks (RSNNs) by providing faster convergence, higher accuracy, and a flexible long short-term memory. We present a hardware-efficient methodology to realize DEXAT neurons using tightly coupled circuit-device interactions, and experimentally demonstrate the DEXAT neuron block using oxide-based non-filamentary resistive switching devices. Using experimentally extracted parameters, we simulate a full RSNN that achieves a classification accuracy of 96.1% on the sequential MNIST (SMNIST) dataset and 91% on the Google Speech Commands (GSC) dataset. We also demonstrate full end-to-end real-time inference for speech recognition using DEXAT neurons built from fabricated resistive memory circuits. Finally, we investigate the impact of nanodevice variability and endurance, illustrating the robustness of DEXAT-based RSNNs.
Highlights
We propose a new Double EXponential Adaptive Threshold (DEXAT) neuron model that improves the performance of neuromorphic Recurrent Spiking Neural Networks (RSNNs) by providing faster convergence, higher accuracy, and a flexible long short-term memory
System-level simulations of a Long Short-Term memory Spiking Neural Network (LSNN) with DEXAT neurons achieved a test classification accuracy of 96.1% on the sequential MNIST (SMNIST) dataset and 91% on the Google Speech Commands (GSC) dataset
Summary
We propose a Double EXponential Adaptive Threshold (DEXAT) neuron model that improves the performance of neuromorphic Recurrent Spiking Neural Networks (RSNNs) by providing faster convergence, higher accuracy, and a flexible long short-term memory. If RSNNs with spike-based temporal computation are to perform better on sequential tasks, it is essential that they gain the stable working memory that Long Short-Term Memory (LSTM) cells provide. In this regard, a recent theoretical work [4] has shown that including Adaptive Leaky Integrate-and-Fire (ALIF) neurons in an RSNN can improve its computational capabilities. Such neurons are used to implement RSNNs that can learn through hardware-friendly algorithms like e-prop [5]. We demonstrate a full end-to-end RSNN using fabricated resistive-memory-based DEXAT neurons for a live speech recognition application on the GSC dataset.
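For intuition, the following is a minimal sketch (in Python) of a single simulation step of an adaptive-threshold neuron with two exponential adaptation components, in the spirit of DEXAT. It follows the common ALIF-style update of [4] extended with a second, slower adaptation variable; all parameter names and values here are illustrative assumptions, not the paper's exact equations or device parameters.

import numpy as np

def dexat_step(v, b1, b2, I, dt=1.0,
               tau_m=20.0,    # membrane time constant (assumed, ms)
               tau_a1=30.0,   # fast adaptation time constant (assumed, ms)
               tau_a2=300.0,  # slow adaptation time constant (assumed, ms)
               b0=1.0,        # baseline threshold (assumed)
               beta1=0.5, beta2=0.5):
    """One Euler step of a leaky integrate-and-fire neuron with a
    double-exponential adaptive threshold."""
    # Leaky integration of the input current I.
    v = v + dt / tau_m * (-v + I)

    # Effective threshold combines fast and slow adaptation variables:
    # B(t) = b0 + beta1*b1(t) + beta2*b2(t).
    threshold = b0 + beta1 * b1 + beta2 * b2

    spike = float(v >= threshold)
    if spike:
        v = 0.0  # reset membrane potential on spike

    # Each adaptation variable decays with its own time constant and is
    # bumped by the output spike, giving the double-exponential decay.
    rho1 = np.exp(-dt / tau_a1)
    rho2 = np.exp(-dt / tau_a2)
    b1 = rho1 * b1 + (1.0 - rho1) * spike
    b2 = rho2 * b2 + (1.0 - rho2) * spike
    return v, b1, b2, spike

# Example: drive the neuron with a constant suprathreshold current.
v = b1 = b2 = 0.0
for t in range(100):
    v, b1, b2, s = dexat_step(v, b1, b2, I=1.5)

The two time constants let the threshold relax on both short and long timescales, which is what provides the flexible long short-term memory relative to a single-exponential ALIF neuron.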