Abstract

Spiking neural networks (SNNs) are inspired by information processing in biology, where sparse and asynchronous binary signals are communicated and processed in a massively parallel fashion. SNNs on neuromorphic hardware exhibit favorable properties such as low power consumption, fast inference, and event-driven information processing. This makes them interesting candidates for the efficient implementation of deep neural networks, the method of choice for many machine learning tasks. In this review, we address the opportunities that deep spiking networks offer and investigate in detail the challenges associated with training SNNs in a way that makes them competitive with conventional deep learning, but simultaneously allows for efficient mapping to hardware. A wide range of training methods for SNNs is presented, ranging from the conversion of conventional deep networks into SNNs, through constrained training before conversion, to spiking variants of backpropagation and biologically motivated variants of spike-timing-dependent plasticity (STDP). The goal of our review is to define a categorization of SNN training methods and summarize their advantages and drawbacks. We further discuss relationships between SNNs and binary networks, which are becoming popular for efficient digital hardware implementation. Neuromorphic hardware platforms have great potential to enable deep spiking networks in real-world applications. We compare the suitability of various neuromorphic systems that have been developed over the past years, and investigate potential use cases. Neuromorphic approaches and conventional machine learning should not be considered simply two solutions to the same classes of problems; instead, it is possible to identify and exploit their task-specific advantages. Deep SNNs offer great opportunities to work with new types of event-based sensors, exploit temporal codes and local on-chip learning, and we have so far just scratched the surface of realizing these advantages in practical applications.
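To make the abstract's notion of "sparse and asynchronous binary signals" concrete, the following is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the spiking model most commonly used in the literature the review covers. The parameter values (tau, v_th, v_reset, dt) are illustrative assumptions, not values taken from the review itself.

```python
def lif_simulate(input_current, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: sequence of input values, one per time step.
    Returns a binary spike train (list of 0/1), one entry per step.
    """
    v = 0.0          # membrane potential
    spikes = []
    for i_t in input_current:
        # Leaky integration: the potential decays toward zero
        # and is driven upward by the input current.
        v += (dt / tau) * (-v + i_t)
        if v >= v_th:
            spikes.append(1)  # threshold crossing emits a binary spike
            v = v_reset       # potential resets after each spike
        else:
            spikes.append(0)
    return spikes

# A constant supra-threshold input produces a regular but sparse
# spike train: most time steps carry no event at all.
train = lif_simulate([1.5] * 200)
firing_rate = sum(train) / len(train)
```

Because the output is binary and mostly zero, downstream computation can be event-driven: work is only done when a spike arrives, which is the property neuromorphic hardware exploits for low power consumption.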

Highlights

  • Training and inference with deep neural networks (DNNs), commonly known as deep learning (LeCun et al., 2015; Schmidhuber, 2015; Goodfellow et al., 2016), has contributed to many of the spectacular success stories of artificial intelligence (AI) in recent years (Goodfellow et al., 2014; Amodei et al., 2016; He et al., 2016; Silver et al., 2016).

  • Researchers from the domains of machine learning, computational neuroscience, neuromorphic engineering, and embedded systems design have tried to bridge the gap between the big success of DNNs in AI applications and the promise of spiking neural networks (SNNs) (Maass, 1997; Ponulak and Kasinski, 2011; Grüning and Bohte, 2014).

  • Event-based vision and audio sensors (Lichtsteiner et al., 2008; Posch et al., 2014; Liu et al., 2015) have reached an increasingly mature level, and deep SNNs are one of the most promising concepts for processing such inputs efficiently (Tavanaei et al., 2018). This line of research has coincided with an increased interest in efficient hardware implementations for conventional DNNs, since the massive hunger for computational resources has turned out to be a major obstacle as deep learning makes its way toward real-world applications such as automated driving, robotics, or the internet of things (IoT).

Summary

INTRODUCTION

Training and inference with deep neural networks (DNNs), commonly known as deep learning (LeCun et al., 2015; Schmidhuber, 2015; Goodfellow et al., 2016), has contributed to many of the spectacular success stories of artificial intelligence (AI) in recent years (Goodfellow et al., 2014; Amodei et al., 2016; He et al., 2016; Silver et al., 2016). Event-based vision and audio sensors (Lichtsteiner et al., 2008; Posch et al., 2014; Liu et al., 2015) have reached an increasingly mature level, and deep SNNs are one of the most promising concepts for processing such inputs efficiently (Tavanaei et al., 2018). This line of research has coincided with an increased interest in efficient hardware implementations for conventional DNNs, since the massive hunger for computational resources has turned out to be a major obstacle as deep learning makes its way toward real-world applications such as automated driving, robotics, or the internet of things (IoT).

What Is a Deep Spiking Neural Network?
Advantages of Deep SNNs
Limitations of Deep SNNs
INFERENCE WITH DEEP SNNS
TRAINING OF DEEP SNNS
Binary Deep Neural Networks
Conversion of Deep Neural Networks
Training of Constrained Networks
Supervised Learning With Spikes
Local Learning Rules
NEUROMORPHIC HARDWARE
Inference on Neuromorphic Hardware
On-Chip Learning
APPLICATIONS
DISCUSSION
