Abstract

The ongoing revolution in deep learning is redefining the nature of computing, driven by the growing volume of pattern classification and cognitive tasks. Specialized digital hardware for deep learning still predominates, owing to the flexibility of software implementations and the maturity of the algorithms. However, cognitive computing is increasingly desired at the edge, i.e., on energy-constrained hand-held devices, where digital von Neumann architectures are energy prohibitive. Recent explorations in digital neuromorphic hardware have shown promise, but they offer low neurosynaptic density, which is needed for scaling to applications such as intelligent cognitive assistants (ICAs). Large-scale integration of nanoscale emerging memory devices with Complementary Metal Oxide Semiconductor (CMOS) mixed-signal integrated circuits can herald a new generation of neuromorphic computers that transcend the von Neumann bottleneck for cognitive computing tasks. Such hybrid Neuromorphic System-on-a-Chip (NeuSoC) architectures promise machine-learning capability at a chip-scale form factor and several orders of magnitude improvement in energy efficiency. Practical demonstrations of such architectures have been limited, as the performance of emerging memory devices falls short of the behavior expected from idealized memristor-based analog synapses, or weights, and novel machine-learning algorithms are needed to take advantage of the actual device behavior. In this article, we review the challenges involved and present a pathway to realizing large-scale mixed-signal NeuSoCs, from device arrays and circuits to spike-based deep learning algorithms with ‘brain-like’ energy efficiency.
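To make the analog-synapse picture concrete, the sketch below shows how an idealized memristor crossbar evaluates a vector-matrix product in a single step: row voltages V applied across cross-point conductances G produce column currents I_j = sum_i V_i * G_ij by Ohm's law and Kirchhoff's current law. This is a minimal illustration in Python, not an implementation from the article; the conductance range and read voltages are assumed for the example.

import numpy as np

# Idealized memristor crossbar: each cross-point stores a conductance G[i, j].
# Driving the rows with read voltages V[i] yields column currents
# I[j] = sum_i V[i] * G[i, j] (Ohm's law per device, Kirchhoff's current law
# per column), i.e., an analog vector-matrix multiply in one step.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # assumed conductances: 1 uS to 100 uS
V = np.array([0.10, 0.00, 0.20, 0.05])    # assumed read voltages (volts)
I = V @ G                                  # column currents (amperes)
print(I)

In practice, device nonlinearity, limited conductance resolution, and wire resistance perturb this ideal picture; that gap between real emerging memory devices and idealized memristor synapses is exactly what the article examines.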

Highlights

  • A recent grand challenge in semiconductor technology urges researchers to “Create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain” [1]; deep-learning techniques have already found widespread success in image and video interpretation, speech and natural language processing, and medical diagnostics [2]

  • We envisage equivalent spiking neural networks (SNNs) that achieve classification accuracy within 1% of that of a deep neural network trained on graphics processing units (GPUs); see the rate-coding sketch after these highlights

  • This article reviews the application of RRAM synapses to mixed-signal neuromorphic computing and the challenges involved in interfacing them with Complementary Metal Oxide Semiconductor (CMOS) neuron circuits
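
The ‘within 1% accuracy’ highlight rests on the near-equivalence of ReLU activations and firing rates under rate coding. The following is a minimal Python sketch of that equivalence, assuming an integrate-and-fire neuron with unit threshold and soft (subtractive) reset; the weights and input are invented for illustration, and activations are compared after clipping to the maximum rate of one spike per timestep, a constraint that weight normalization handles in practice.

import numpy as np

# Rate-coding sketch: an integrate-and-fire (IF) neuron with unit threshold,
# driven by a constant input current, fires at a rate approximating the ReLU
# of that current, so a rate-coded SNN can track a trained ANN layer.
rng = np.random.default_rng(1)
W = rng.normal(scale=0.3, size=(3, 5))   # hypothetical trained weights
x = rng.uniform(0.0, 1.0, size=3)        # hypothetical input vector

ann = np.clip(W.T @ x, 0.0, 1.0)         # ReLU, clipped at the max spike rate

T = 1000                                 # simulation timesteps
v = np.zeros(5)                          # membrane potentials
spikes = np.zeros(5)
for _ in range(T):
    v += W.T @ x                         # integrate the constant input current
    fired = v >= 1.0                     # unit firing threshold
    spikes += fired
    v[fired] -= 1.0                      # soft reset: subtract the threshold

snn_rate = spikes / T                    # firing rate over the run
print(np.max(np.abs(ann - snn_rate)))    # shrinks toward 0 as T grows

Whether the SNN is obtained by conversion from a GPU-trained network or trained directly with spike-based learning, it is this rate-level correspondence that allows its classification accuracy to stay close to the deep network's.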


Introduction

A recent grand challenge in semiconductor technology urges researchers to “Create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain” [1]. Artificial Intelligence (AI) techniques such as deep neural networks, or deep learning, have found widespread success when applied to problems including image and video interpretation, speech and natural language processing, and medical diagnostics [2]. The current explosion in the deployment of deep-learning applications is expected to hit a power-performance wall, owing to (1) the plateauing of Complementary Metal Oxide Semiconductor (CMOS) scaling and (2) the limits set on energy consumption in the Cloud. Moreover, today's implementations require days on large computing clusters to train a network for realistic applications. The unique contribution of this review article is its focus on the interfacing of mixed-signal circuits with emerging synaptic devices, and its discussion of the resulting design considerations that impact the overall energy efficiency and scalability of large-scale NeuSoCs. In addition, a survey of recent learning algorithms and their associated challenges is presented for realizing deep learning in NeuSoCs. This article is organized as follows.

Digital Neuromorphic Platforms
Subthreshold Analog Neuromorphic Platforms
Neuromorphic Platforms Using Floating-Gate and Phase Change Memories
Nanoscale Emerging Devices
Mixed-Signal Neuromorphic Architecture
Crossbar Networks
Event-Driven Neurons with Localized Learning
Spike-Based Neural Learning Algorithms
Challenges with Emerging Devices as Synapses
Bio-Inspiration for Higher-Resolution Synapses
Compound Synapse with Axonal and Dendritic Processing
Modified CMOS Neuron with Dendritic Processing
Energy-Efficiency of Neuromorphic SoCs
Towards Large-Scale Neuromorphic SoCs
Conclusions