Abstract

Deep artificial neural networks apply principles of the brain's information processing that have led to breakthroughs in machine learning spanning many problem domains. Neuromorphic computing aims to take this a step further, toward chips more directly inspired by the form and function of biological neural circuits, so that they can process new knowledge, adapt, behave, and learn in real time at low power levels. Despite several decades of research, until recently very few published results had shown that today's neuromorphic chips can demonstrate quantitative computational value. This is now changing with the advent of Intel's Loihi, a neuromorphic research processor designed to support a broad range of spiking neural networks with sufficient scale, performance, and features to deliver competitive results compared to state-of-the-art contemporary computing architectures. This survey reviews the results obtained to date with Loihi across the major algorithmic domains under study, including deep learning approaches and novel approaches that aim to more directly harness the key features of spike-based neuromorphic hardware. While conventional feedforward deep neural networks show modest, if any, benefit on Loihi, more brain-inspired networks using recurrence, precise spike-timing relationships, synaptic plasticity, stochasticity, and sparsity perform certain computations with orders of magnitude lower latency and energy than state-of-the-art conventional approaches. These compelling neuromorphic networks solve a diverse range of problems representative of brain-like computation, such as event-based data processing, adaptive control, constrained optimization, sparse feature regression, and graph search.
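To make the spiking-neural-network model concrete, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron, the basic unit family that chips like Loihi implement in hardware. This is a minimal illustrative model, not Loihi's actual fixed-point neuron implementation; the parameter values (`tau`, `v_th`) are assumptions chosen for clarity.

```python
import numpy as np

def lif_simulate(input_current, v_th=1.0, v_reset=0.0, tau=20.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron over discrete timesteps.

    Returns the membrane-potential trace and the spike times (step indices).
    """
    v = v_reset
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the potential decays toward rest each step
        # while accumulating the injected input current.
        v += (dt / tau) * (v_reset - v) + i_in
        if v >= v_th:
            # Threshold crossing emits a discrete spike event, then resets.
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# A constant drive yields a regular spike train whose rate encodes intensity.
trace, spikes = lif_simulate(np.full(100, 0.06))
```

Information leaves the neuron only as sparse, asynchronous spike events rather than dense activations, which is the property the survey's "brain-inspired" networks exploit for low latency and energy.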

Highlights

  • Neuromorphic computing seeks to understand and adapt fundamental properties of neural architectures found in nature in order to discover a new model of computer architecture, one that is natively suited for classes of brain-inspired computation that challenge the von Neumann model

  • All spike messages carry events related to the activity of a single source neuron, putting no pressure on the architecture or algorithms to maintain activity over blocks of neurons with shared connectivity. These properties place Loihi in a diametrically opposite architectural regime compared to state-of-the-art von Neumann processors and deep learning accelerators, whose wide datapaths, deep pipelines, and high memory access latencies demand dense, deep, and predictably active networks in order to achieve high performance and efficiency

  • We focus on examples that have been rigorously benchmarked in accuracy, energy, and time to solution against comparable or equivalent artificial neural network (ANN) solutions running on conventional architectures

Summary

INTRODUCTION

Neuromorphic computing seeks to understand and adapt fundamental properties of neural architectures found in nature in order to discover a new model of computer architecture, one that is natively suited for classes of brain-inspired computation that challenge the von Neumann model. These properties include fully integrated memory and computing, fine-grain parallelism, pervasive feedback and recurrence, massive network fan-outs, low-precision and stochastic computation, and continuously adaptive processes commonly associated with learning. Despite promising quantitative experiments [5], [6], these systems have struggled to demonstrate value as a practical tool for neuroscience discovery [7]. This illustrates the challenge that the neuromorphic community faces: even with fundamental architectural advantages, it is difficult for research systems to match the mature products of conventional computing that have been optimized over generations and even co-optimized with the underlying manufacturing technology. Interested readers are encouraged to refer to prior publications [13], [15], [16] and Intel's online resources for further details.

Loihi Chip
Systems
Software
Deep SNN Conversion
Direct Deep SNN Training
Online Approximations of Backpropagation
ATTRACTOR NETWORKS
Locally Competitive Algorithm
Dynamic Neural Fields
COMPUTING WITH TIME
Nearest Neighbor Search
Graph Search
Stochastic Constrained Optimization
Event-Based Sensing and Perception
Odor Recognition and Learning
Closed-Loop Control for Robotics
Simultaneous Localization and Mapping
Other Applications
Online Learning
Sensor Integration
Robotics
Planning, Optimization, and Reasoning
Programming Model
Economic Viability
Findings
CONCLUSION
