Abstract

Artificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of communication. This contrasts sharply with biological neurons that communicate sparingly and efficiently using isomorphic binary spikes. While Spiking Neural Networks (SNNs) can be constructed by replacing the units of an ANN with spiking neurons (Cao et al., 2015; Diehl et al., 2015) to obtain reasonable performance, these SNNs use Poisson spiking mechanisms with exceedingly high firing rates compared to their biological counterparts. Here we show how spiking neurons that employ a form of neural coding can be used to construct SNNs that match high-performance ANNs and match or exceed the state-of-the-art in SNNs on important benchmarks, while requiring firing rates compatible with biological findings. For this, we use spike-based coding based on the firing-rate-limiting adaptation phenomenon observed in biological spiking neurons. This phenomenon can be captured in fast-adapting spiking neuron models, for which we derive the effective transfer function. Neural units in ANNs trained with this transfer function can be substituted directly with adaptive spiking neurons, and the resulting Adaptive SNNs (AdSNNs) can carry out competitive classification in deep neural networks without further modifications. Adaptive spike-based coding additionally allows for the dynamic control of neural coding precision: we show empirically how a simple model of arousal in AdSNNs further halves the average required firing rate, and this notion naturally extends to other forms of attention as studied in neuroscience. AdSNNs thus hold promise as a novel and sparsely active model for neural computation that naturally fits temporally continuous and asynchronous applications.
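
To make the adaptation mechanism concrete, the sketch below simulates a generic leaky integrate-and-fire neuron with a spike-triggered adaptive threshold. This is an illustrative stand-in, not the exact ASN equations; the function name simulate_adaptive_neuron and all parameter values (dt, tau_mem, theta0, tau_adapt, beta) are assumptions chosen only to show how adaptation limits the firing rate under sustained input.

```python
import numpy as np

def simulate_adaptive_neuron(inputs, dt=1e-3, tau_mem=20e-3,
                             theta0=1.0, tau_adapt=50e-3, beta=0.5):
    """Leaky integrate-and-fire neuron with a spike-triggered adaptive
    threshold (illustrative stand-in for a fast-adapting spiking neuron).
    Every spike raises the threshold by `beta`; the threshold then decays
    back to `theta0`, so sustained input yields progressively fewer spikes."""
    v, theta = 0.0, theta0
    spikes = np.zeros(len(inputs))
    for t, s in enumerate(inputs):
        v += dt / tau_mem * (s - v)                  # leaky integration of the input
        theta += dt / tau_adapt * (theta0 - theta)   # threshold relaxes toward rest
        if v >= theta:
            spikes[t] = 1.0
            v = 0.0        # reset membrane potential after a spike
            theta += beta  # spike-triggered threshold increase (adaptation)
    return spikes

# One second of constant drive: a brief initial burst, then a low sustained rate.
print(int(simulate_adaptive_neuron(np.full(1000, 2.0)).sum()), "spikes in 1 s")
```

Because every spike raises the threshold, the neuron answers a step input with a short burst and then settles to a much lower sustained rate; this firing-rate-limiting behavior is what the adaptive spike-based coding builds on.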

Highlights

  • With rapid advances in deep neural networks, renewed consideration is given to the question of how artificial neural networks relate to the details of information processing in real biological spiking neurons

  • We construct Adaptive SNNs (AdSNNs) composed of Adaptive Spiking Neuron (ASN) models that use adaptive spike-coding, following an approach similar to the one pioneered in Diehl et al. (2015), to obtain high-performance, sparsely active spiking neural networks (SNNs)

  • Artificial Neural Networks (ANNs) are constructed with analog neural units that use the derived half-sigmoid-like transfer function f(S), both for fully connected feed-forward ANNs and for various deep convolutional neural network architectures (see the sketch after this list)
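
A minimal sketch of this idea in PyTorch, under explicit assumptions: the exact derived f(S) is not reproduced here, so the HalfSigmoid module below is only a placeholder with the same qualitative shape (zero for non-positive input, saturating for positive input), and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class HalfSigmoid(nn.Module):
    """Placeholder half-sigmoid-like activation: 0 for S <= 0 and a saturating
    sigmoid-shaped curve for S > 0. It only mimics the qualitative shape of
    the transfer function f(S) derived in the paper."""
    def forward(self, s):
        return torch.where(s > 0, 2.0 * torch.sigmoid(s) - 1.0,
                           torch.zeros_like(s))

# Fully connected feed-forward ANN whose units use the half-sigmoid-like
# activation; convolutional layers could be used in the same way.
model = nn.Sequential(
    nn.Linear(784, 300), HalfSigmoid(),
    nn.Linear(300, 100), HalfSigmoid(),
    nn.Linear(100, 10),
)
print(model(torch.randn(1, 784)).shape)  # torch.Size([1, 10])
```

After training such a network, each analog unit would be substituted by an adaptive spiking neuron with a matching transfer function to obtain the corresponding AdSNN.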

Introduction

With rapid advances in deep neural networks, renewed consideration is given to the question of how artificial neural networks relate to the details of information processing in real biological spiking neurons. Apart from its still vastly more flexible operation, the huge spiking neural network that constitutes the brain is highly energy efficient. This derives in large part from its sparse activity: estimates are that neurons in mammalian brains on average emit only somewhere between 0.2 and 5 spikes per second (Attwell and Laughlin, 2001). Sensory neurons adaptively control the number of spikes they use to efficiently cover large dynamic ranges (Fairhall et al., 2001). This adaptive behavior can be captured with fast (<100 ms) spike-frequency adaptation in Leaky Integrate-and-Fire neuron models, or corresponding Spike Response Models (SRMs) (Gerstner and Kistler, 2002; Bohte, 2012; Pozzorini et al., 2013), including the Adaptive Spiking Neuron (ASN) model (Bohte, 2012). We demonstrate the effectiveness of such neurons for creating powerful deep SNNs.
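
For reference, a generic Spike Response Model with a spike-triggered adaptive threshold can be written in the following textbook form (Gerstner and Kistler, 2002); the exact kernels and the multiplicative adaptation of the ASN (Bohte, 2012) differ in detail, so this is only a sketch of the mechanism:

```latex
\begin{align}
  u(t) &= \sum_{t^{f}} \eta\bigl(t - t^{f}\bigr)
          + \int_{0}^{\infty} \kappa(s)\, I(t - s)\, \mathrm{d}s, \\
  \vartheta(t) &= \vartheta_{0} + \sum_{t^{f}} \gamma\, e^{-(t - t^{f})/\tau_{\gamma}}, \\
  u(t) &\ge \vartheta(t) \quad \Rightarrow \quad \text{emit a spike at time } t .
\end{align}
```

Here u(t) is the membrane potential built from the refractory kernel η and the filtered input current I, and each past spike at time t^f raises the threshold ϑ(t) above its resting value ϑ0 by γ, decaying with the fast time constant τ_γ; this spike-triggered threshold growth is what produces spike-frequency adaptation.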
