Abstract

Spiking neural networks (SNNs) have recently gained considerable interest for edge-AI applications due to their low latency and ultra-low energy consumption. Unlike DNNs, SNNs communicate information using spike trains. As the derivative of a spike train is ill-defined, surrogate gradients have been proposed as an efficient method for training SNNs. Still, the scarcity of open-source SNN software and the limited range of demonstrated SNN applications slow down wider SNN adoption. We release our ConvSNN framework, demonstrating the novel applicability of quantized-weight SNNs to radar gesture recognition. Our framework will facilitate future research in the SNN area.

Highlights

  • In recent years, spiking neural networks (SNNs) have emerged as a new event-based computing paradigm

  • Wider adoption of SNNs has been slowed mainly by (1) the small number of open-source SNN frameworks, and (2) the limited number of demonstrated applications. We address both problems by releasing our convolutional SNN (ConvSNN) framework, targeting the novel use-case of SNN-based radar gesture recognition

  • Our released software demonstrates the applicability of a resource-constrained ConvSNN with 4-bit weights on two different radar gesture recognition datasets [6,8], achieving more than 91% accuracy

Summary

Introduction

In recent years, spiking neural networks (SNNs) have emerged as a new event-based computing paradigm (as opposed to classical frame-based networks). In contrast to the continuous activation functions (e.g., ReLU) used in classical deep neural networks (DNNs), SNNs make use of discontinuous, spiking activation functions, which encode information as spike trains (Dirac combs). This leads to ill-defined gradients throughout the network, prohibiting the direct use of error back-propagation (backprop). Our released software demonstrates the applicability of a resource-constrained ConvSNN with 4-bit weights (a typical bit width in neuromorphic processors [7]) on two different radar gesture recognition datasets [6,8], achieving more than 91% accuracy. It is ready for implementation in the growing number of ultra-low-power neuromorphic processors [1].
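The two ingredients above (a surrogate gradient replacing the ill-defined spike derivative, and low-bit weight quantization) can be illustrated with a minimal NumPy sketch. This is not the released ConvSNN code: the function names, the fast-sigmoid surrogate, and the symmetric quantizer are illustrative assumptions, chosen because they are common choices in the surrogate-gradient SNN literature.

```python
import numpy as np

def spike(v, threshold=1.0):
    # Forward pass: Heaviside step. A neuron emits a spike (1) when its
    # membrane potential reaches the threshold, otherwise stays silent (0).
    return (v >= threshold).astype(np.float32)

def surrogate_grad(v, threshold=1.0, slope=10.0):
    # Backward pass: the true derivative of the step is a Dirac delta,
    # so backprop substitutes a smooth surrogate. Here, the derivative
    # of a fast sigmoid centred on the threshold (slope is a
    # hyperparameter controlling its sharpness).
    return slope / (1.0 + slope * np.abs(v - threshold)) ** 2

def quantize_weights(w, bits=4):
    # Symmetric uniform quantization to `bits` bits, mimicking the small
    # weight memories of neuromorphic processors.
    levels = 2 ** (bits - 1) - 1          # e.g. 7 for signed 4-bit
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale, scale

# Usage: sub-threshold potentials stay silent, supra-threshold ones fire,
# and the surrogate peaks at the threshold so learning signal flows there.
potentials = np.array([0.5, 1.2])
print(spike(potentials))               # -> [0. 1.]
print(surrogate_grad(np.array([1.0])))  # maximal at the threshold
wq, scale = quantize_weights(np.random.randn(64), bits=4)
```

During training, the forward pass uses `spike` while the chain rule uses `surrogate_grad` in its place, which is what makes end-to-end backprop through spiking layers possible.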

