Abstract

Spiking neural networks are the basis of versatile and power-efficient information processing in the brain. Although we currently lack a detailed understanding of how these networks compute, recently developed optimization techniques allow us to instantiate increasingly complex functional spiking neural networks in-silico. These methods hold the promise to build more efficient non-von-Neumann computing hardware and will offer new vistas in the quest of unraveling brain circuit function. To accelerate the development of such methods, objective ways to compare their performance are indispensable. Presently, however, there are no widely accepted means for comparing the computational performance of spiking neural networks. To address this issue, we introduce two spike-based classification data sets, broadly applicable to benchmark both software and neuromorphic hardware implementations of spiking neural networks. To accomplish this, we developed a general audio-to-spiking conversion procedure inspired by neurophysiology. Furthermore, we applied this conversion to an existing and a novel speech data set. The latter is the free, high-fidelity, and word-level aligned Heidelberg digit data set that we created specifically for this study. By training a range of conventional and spiking classifiers, we show that leveraging spike timing information within these data sets is essential for good classification accuracy. These results serve as the first reference for future performance comparisons of spiking neural networks.
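For readers who want to inspect the data directly, the following minimal sketch loads one Spiking Heidelberg Digits (SHD) recording with h5py. It assumes the HDF5 layout documented with the public release (groups spikes/times and spikes/units plus a labels dataset) and a local file called shd_train.h5; both the layout and the file name are assumptions to check against your copy, not details taken from the paper itself.

    # Minimal sketch: inspect one Spiking Heidelberg Digits (SHD) recording.
    # Assumes the released HDF5 layout: spikes/times, spikes/units, labels.
    import h5py
    import numpy as np

    with h5py.File("shd_train.h5", "r") as f:   # file name is an assumption
        times = f["spikes"]["times"][0]         # spike times of sample 0, in seconds
        units = f["spikes"]["units"][0]         # input channel index of each spike
        label = f["labels"][0]                  # spoken-digit class of sample 0

    print(f"sample 0: {len(times)} spikes on {len(np.unique(units))} channels, label {label}")

Each recording is a variable-length list of (time, channel) events rather than a dense array, which is why spiking classifiers consume it so naturally.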

Highlights

  • Spiking neural networks (SNNs) are biology’s solution for fast and versatile information processing

  • We found that while a linear support vector machine (SVM) readily overfitted the data in the case of the Spiking Heidelberg Digits (SHD), its test performance only marginally exceeded the 55% accuracy mark [Fig. 3(a)]; a rate-only baseline of this kind is sketched after this list

  • In this article, we introduced two new public-domain spike-based classification data sets to facilitate the quantitative comparison of SNNs

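As a concrete illustration of the rate-only baseline discussed in the highlights, the sketch below collapses each SHD sample to per-channel spike counts, a representation that throws away spike timing, and fits a linear SVM with scikit-learn. The file names, the 700-channel constant, and the exact featurization are illustrative assumptions and not the authors’ evaluation pipeline.

    # Illustrative rate-based baseline (not the paper's exact pipeline):
    # per-channel spike counts + linear SVM, which ignores spike timing.
    import h5py
    import numpy as np
    from sklearn.svm import LinearSVC

    N_CHANNELS = 700  # SHD provides 700 input channels

    def spike_count_features(path):
        """Return an (n_samples, N_CHANNELS) count matrix and the label vector."""
        with h5py.File(path, "r") as f:
            units = f["spikes"]["units"][:]      # ragged array: channels of each spike
            labels = np.array(f["labels"])
        X = np.stack([np.bincount(u.astype(int), minlength=N_CHANNELS) for u in units])
        return X, labels

    X_train, y_train = spike_count_features("shd_train.h5")  # file names assumed
    X_test, y_test = spike_count_features("shd_test.h5")

    clf = LinearSVC(C=1.0, max_iter=5000).fit(X_train, y_train)
    print("train accuracy:", clf.score(X_train, y_train))
    print("test accuracy:", clf.score(X_test, y_test))

A large gap between the two printed accuracies is the overfitting behavior the highlight refers to; because the count features discard when spikes occur, such a baseline indicates how far one can get without exploiting spike timing.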
Introduction

Spiking neural networks (SNNs) are biology’s solution for fast and versatile information processing. To instantiate such functional connectivity in-silico, a growing number of SNN training algorithms have been developed [2]–[8], both for conventional computers and for neuromorphic hardware [9]–[14]. This diversity of learning algorithms urgently calls for principled means to compare them. In this article, we seek to fill this gap by introducing two new, broadly applicable classification data sets for SNNs.
