Abstract

Neuromorphic computing has emerged as one of the most promising paradigms to overcome the limitations of the von Neumann architecture of conventional digital processors. The aim of neuromorphic computing is to faithfully reproduce the computing processes of the human brain, thus paralleling its outstanding energy efficiency and compactness. Toward this goal, however, several major challenges must be addressed. Since the brain processes information through high-density neural networks with ultra-low power consumption, novel device concepts combining high scalability, low-power operation, and advanced computing functionality must be developed. This work provides an overview of the most promising device concepts in neuromorphic computing, including complementary metal-oxide semiconductor (CMOS) and memristive technologies. First, the physics and operation of CMOS-based floating-gate memory devices in artificial neural networks will be addressed. Then, several memristive concepts will be reviewed and discussed for applications in deep neural network (DNN) and spiking neural network (SNN) architectures. Finally, the main technology challenges and perspectives of neuromorphic computing will be discussed.

Highlights

  • The complementary metal-oxide semiconductor (CMOS) technology has sustained tremendous progress in communication and information processing since the 1960s

  • In contrast to deep neural networks (DNNs), the learning ability of spiking neural networks (SNNs) emerges via unsupervised training processes, where synapses are potentiated or depressed according to bio-realistic learning rules inspired by the brain

  • In spike-timing-dependent plasticity (STDP), which was experimentally demonstrated in hippocampal cultures by Bi and Poo in 1998 [38], the synaptic weight update depends on the relative timing between the presynaptic and postsynaptic spikes (Figure 2a)
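
As an illustration of how such a timing-dependent rule operates, the following minimal sketch implements a pair-based STDP update with an exponential timing window. The amplitudes and time constants (a_plus, a_minus, tau_plus, tau_minus) are illustrative assumptions, not parameters taken from the reviewed work.

    import math

    def stdp_weight_update(delta_t, a_plus=0.01, a_minus=0.012,
                           tau_plus=20.0, tau_minus=20.0):
        """Pair-based STDP: weight change for one spike pair.

        delta_t = t_post - t_pre (in ms). A positive delta_t (presynaptic
        spike before postsynaptic spike) potentiates the synapse; a
        negative delta_t depresses it.
        """
        if delta_t > 0:
            return a_plus * math.exp(-delta_t / tau_plus)    # potentiation
        elif delta_t < 0:
            return -a_minus * math.exp(delta_t / tau_minus)  # depression
        return 0.0

    # Example: presynaptic spike 5 ms before the postsynaptic spike
    print(stdp_weight_update(5.0))    # small positive update (potentiation)
    print(stdp_weight_update(-5.0))   # small negative update (depression)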



Introduction

The complementary metal-oxide semiconductor (CMOS) technology has sustained tremendous progress in communication and information processing since the 1960s. In the last 15 years, however, Moore's scaling law has slowed down owing to two fundamental issues, namely the excessive subthreshold leakage currents and the increasing heat generated within the chip [3,4]. To overcome these barriers, new advances have been introduced, including the adoption of high-k materials as the gate dielectric [5], the redesign of the transistor with multigate structures [6,7], and 3D integration [8]. Beyond device scaling, conventional processors also face a memory bottleneck: in the von Neumann architecture, data must be continuously shuttled between physically separated memory and processing units, which dominates latency and energy consumption. Solving the memory bottleneck requires a paradigm shift in architecture, where computation is executed in situ within the memory where the data are stored, exploiting, e.g., the ability of memory arrays to implement matrix-vector multiplication (MVM) [10,11]. This novel architectural approach is referred to as in-memory computing, which provides the basis for several outstanding applications, such as pattern classification [12,13], analogue image processing [14], and the solution of linear systems [15,16].
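
To make the MVM concept concrete, the following sketch (in Python/NumPy, purely illustrative) shows the operation a memory array performs in a single read step: matrix elements are stored as device conductances G, the input vector is applied as row voltages V, and Ohm's law combined with Kirchhoff's current law yields the column currents I = G^T · V. The specific values and array size are assumptions for illustration only.

    import numpy as np

    # Conductance matrix G (siemens): the device at row i, column j stores
    # one matrix element as its programmed conductance.
    G = np.array([[1e-6, 5e-6, 2e-6],
                  [4e-6, 1e-6, 3e-6]])   # 2 rows x 3 columns

    # Input vector applied as voltages on the rows (volts).
    V = np.array([0.2, 0.1])

    # Ohm's law gives each device current; Kirchhoff's current law sums the
    # currents along each column, so the column currents are I = G^T @ V,
    # i.e., a full matrix-vector multiplication in one read operation.
    I = G.T @ V

    print(I)  # column currents encode the MVM result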

Neuromorphic Computing Concepts
Memory Transistors as Synaptic Devices in Artificial Neural Networks
Memristive Technologies
Memristive Devices with Two-Terminal Structure
Memristive Devices with Three-Terminal Structure
Memristive Neuromorphic Networks
DNNs with Memristive Synapses
SNNs with Memristive Synapses
Discussion
Conclusions
