Abstract

Recent advances in neuromorphic computing have established a computational framework that removes the processor-memory bottleneck inherent in traditional von Neumann computing. Moreover, contemporary photonic circuits have addressed the limitations of electrical computing platforms by offering energy-efficient, parallel interconnects whose performance is largely independent of distance. When combined with reconfigurable photonic elements, such interconnects can serve as synapses in an analog platform capable of arbitrary linear matrix operations, including multiply–accumulate (MAC) operations and convolutions, at extremely high speed and energy efficiency. Both all-optical and optoelectronic nonlinear transfer functions have been investigated for realizing neurons with photonic signals. A number of research efforts have reported estimated orders-of-magnitude improvements in computational throughput and energy efficiency. Compared with biological neural systems, however, such photonic neuromorphic systems still face challenges in achieving high scalability and density. Recently developed tensor-train-decomposition methods and three-dimensional photonic integration technologies can potentially address both algorithmic and architectural scalability. This tutorial covers architectures, technologies, learning algorithms, and benchmarking for photonic and optoelectronic neuromorphic computers.
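
To make the linear operations described above concrete, the following is a minimal numerical sketch (not drawn from the tutorial itself) of how meshes of Mach–Zehnder interferometers (MZIs) are commonly programmed to realize an arbitrary weight matrix: two unitary meshes sandwiching a column of amplitude modulators, following the well-known singular-value-decomposition layout. The function name `mzi`, its parameterization, and all values are illustrative assumptions; NumPy's SVD stands in for a mesh-programming routine such as the Clements decomposition.

```python
import numpy as np

def mzi(theta, phi):
    """2x2 unitary transfer matrix of a single Mach-Zehnder
    interferometer: theta sets the splitting ratio, phi the
    relative input phase (one common convention; signs vary)."""
    s, c = np.sin(theta / 2), np.cos(theta / 2)
    return 1j * np.exp(1j * theta / 2) * np.array(
        [[np.exp(1j * phi) * s, c],
         [np.exp(1j * phi) * c, -s]])

# Each MZI is lossless: its transfer matrix is unitary.
u = mzi(0.7, 1.3)
assert np.allclose(u @ u.conj().T, np.eye(2))

# Arbitrary weight matrix W factored as U @ diag(S) @ Vh.
# In hardware, U and Vh would each be programmed into an MZI mesh
# and diag(S) into a column of amplitude modulators; here NumPy's
# SVD stands in for the mesh-programming decomposition.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))      # target weight matrix
U, S, Vh = np.linalg.svd(W)
x = rng.normal(size=4)           # input signal (modal amplitudes)
y = U @ (np.diag(S) @ (Vh @ x))  # three optical stages in sequence
assert np.allclose(y, W @ x)     # the mesh computes W @ x
```

Because each MZI is lossless, the two meshes implement the unitary factors exactly; only the diagonal stage requires gain or attenuation.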

Highlights

  • Artificial Intelligence (AI) and Machine Learning (ML) have transformed our everyday lives—everything from scientific computing to shopping and entertainment

  • A number of research efforts have reported estimated orders-of-magnitude improvements in computational throughput and energy efficiency

  • Despite the computational efficiency and promise of out-of-plane and fiber-based approaches, the remainder of this tutorial focuses on the construction of in-plane, integrated photonic neural networks that more closely approximate the physical scales of biological systems

INTRODUCTION

Artificial Intelligence (AI) and Machine Learning (ML) have transformed our everyday lives—everything from scientific computing to shopping and entertainment. Yet most of these workloads still run on processors built on the architecture von Neumann proposed. Interestingly, von Neumann himself utilized synapses, neurons, and neural networks in his 1945 report to explain that architecture, and he predicted its key limitation—now called the von Neumann bottleneck [3]—by stating that “the main bottleneck of an automatic very high-speed computing device lies: At the memory.” Because of this limitation, relatively simple tasks such as learning and pattern recognition require a large amount of data movement (including movement of the weight values) between the processor and the memory, across the bottleneck.

Other reviews discuss interconnect technology, network topology, neuron design, and algorithm choices more broadly, at differing levels of depth [40]–[44]. This tutorial aims to concisely and comprehensively unify each of the aforementioned aspects of photonic neuromorphic design, covering them at their most fundamental level before describing how they relate to the computational abilities of the system; references to other reviews are given for implementation and other details not fully addressed here.
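
As a back-of-the-envelope illustration of this data-movement cost, consider a single fully connected layer whose weights must cross the processor-memory interface on every inference pass. The sizes and precision below are illustrative assumptions, not figures from the tutorial:

```python
# Illustrative estimate of von Neumann-bottleneck traffic for one
# fully connected layer; layer dimensions and precision are assumed.
n_in, n_out = 1024, 1024                 # layer dimensions
bytes_per_value = 4                      # float32
macs = n_in * n_out                      # multiply-accumulates per pass
weight_bytes = macs * bytes_per_value    # weights fetched each pass
activation_bytes = (n_in + n_out) * bytes_per_value

print(f"MACs per pass:           {macs:,}")
print(f"weight traffic per pass: {weight_bytes / 2**20:.1f} MiB")
print(f"activation traffic:      {activation_bytes / 2**10:.1f} KiB")
print(f"bytes moved per MAC:     {weight_bytes / macs:.1f}")
```

If the weights cannot be cached, roughly four bytes cross the bottleneck for every multiply–accumulate; a hardware substrate that holds the weights in place, such as an analog photonic mesh, needs to move only the activations.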

Rationale for Optoelectronic and Photonic Neuromorphic Computing
Spiking vs Non-Spiking Photonic Neural Networks
Network Topology
Building Blocks of Photonic Neuromorphic Computing Systems
Forming Reconfigurable Optical Synapses
Assembling Photonic Synaptic Meshes
Photonic and Optoelectronic Nonlinear Neurons
Learning
Tensor Train Decomposition
BENCHMARKING METRICS
SUMMARY
REFERENCES