Abstract

This paper proposes a large-scale, energy-efficient, high-throughput, and compact tensorized optical neural network (TONN) exploiting the tensor-train decomposition architecture on an integrated III–V-on-silicon metal–oxide–semiconductor capacitor (MOSCAP) platform. By using cascaded multi-wavelength small-radix (e.g., 8 × 8) tensor cores, the proposed TONN architecture is scalable to 1024 × 1024 synapses and beyond, which is extremely difficult for conventional integrated ONN architectures. Simulation experiments show that the proposed TONN uses 79× fewer Mach–Zehnder interferometers (MZIs) and 5.2× fewer cascaded stages of MZIs than the conventional ONN while maintaining >95% training accuracy on Modified National Institute of Standards and Technology (MNIST) handwritten digit classification tasks. Furthermore, with the proven heterogeneous III–V-on-silicon MOSCAP platform, the proposed TONN can improve footprint-energy efficiency by a factor of 1.4 × 10⁴ compared with digital electronics artificial neural network (ANN) hardware and by a factor of 2.9 × 10² compared with silicon photonic and phase-change material technologies. This paper thus lays out a road map for implementing large-scale ONNs with a number of synapses comparable to, and energy efficiency superior to, electronic ANNs.
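
To put the reported MZI savings in perspective, here is a minimal back-of-the-envelope sketch in Python. It assumes the conventional ONN realizes an N × N weight matrix with a Clements-style rectangular mesh of N(N − 1)/2 MZIs and that the TONN replaces it with cascaded 8 × 8 tensor cores; the core count `NUM_CORES` is a hypothetical value chosen to reproduce the paper's ~79× figure, since the actual count follows from the tensor-train ranks used in the paper.

```python
# Back-of-the-envelope MZI counts: full-size mesh vs. cascaded 8 x 8
# tensor cores. Assumptions are illustrative, not the paper's exact layout.

def clements_mzi_count(n: int) -> int:
    """MZIs in an n x n Clements rectangular mesh: n(n - 1)/2."""
    return n * (n - 1) // 2

N = 1024          # synaptic matrix size considered in the paper
CORE_RADIX = 8    # small-radix tensor core size (8 x 8)
NUM_CORES = 237   # hypothetical core count, chosen for illustration

conventional = clements_mzi_count(N)                     # 523,776 MZIs
tensorized = NUM_CORES * clements_mzi_count(CORE_RADIX)  # 28 MZIs per core

print(f"conventional mesh : {conventional:,} MZIs")
print(f"tensorized (est.) : {tensorized:,} MZIs")
print(f"reduction factor  : {conventional / tensorized:.0f}x")
```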

Highlights

  • Artificial neural networks (ANNs) have proven their remarkable capabilities in various tasks, including computer vision, speech recognition, machine translations, medical diagnoses, and the game of Go.¹ Neuromorphic computing accelerators, such as IBM TrueNorth² and Intel Loihi,³ have shown significantly superior performance compared with traditional central processing units (CPUs) for specific neural network tasks

  • Simulation experiments show that the proposed tensorized optical neural network (TONN) uses 79× fewer Mach–Zehnder interferometers (MZIs) and 5.2× fewer cascaded stages of MZIs compared with conventional optical neural networks (ONNs) while maintaining a >95% training accuracy for Modified National Institute of Standards and Technology (MNIST) handwritten digit classification tasks

  • With the proven heterogeneous III–V-on-silicon metal–oxide–semiconductor capacitor (MOSCAP) platform, our proposed TONN can improve the footprint-energy efficiency by a factor of 1.4 × 10⁴ compared with digital electronics artificial neural network (ANN) hardware and a factor of 2.9 × 10² compared with silicon photonic and phase-change material technologies

Summary

INTRODUCTION

Artificial neural networks (ANNs) have proven their remarkable capabilities in various tasks, including computer vision, speech recognition, machine translations, medical diagnoses, and the game of Go. Neuromorphic computing accelerators, such as IBM TrueNorth and Intel Loihi, have shown significantly superior performance compared with traditional central processing units (CPUs) for specific neural network tasks. However, scaling conventional integrated optical neural networks (ONNs) to large synaptic matrices is extremely difficult, and aligning III–V diode laser chips to silicon photonic (SiPh) chips induces additional coupling losses and packaging complexity, limiting energy efficiency and integration density. To mitigate these two challenges, on the architecture side, tensor-train (TT) decomposed synaptic interconnections have been proposed to realize large-scale ONNs with reduced hardware resources. This paper proposes a large-scale, energy-efficient, high-throughput, and compact tensorized ONN (TONN) architecture on a densely integrated III–V-on-silicon metal–oxide–semiconductor capacitor (MOSCAP) platform.
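
To make the hardware savings of TT decomposition concrete, the sketch below implements plain TT-SVD (the standard sequential-SVD factorization) in NumPy and counts the parameters of a 1024 × 1024 weight matrix tensorized as ten modes of size 4 (1024 = 4⁵ per side). The factorization shape and the uniform rank of 8 are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Plain TT-SVD: factor a d-way tensor into d TT cores, truncating
    each unfolding's SVD to at most `max_rank` singular values."""
    dims = tensor.shape
    cores, r, C = [], 1, tensor
    for k in range(len(dims) - 1):
        C = C.reshape(r * dims[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        rk = min(max_rank, S.size)
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        C = np.diag(S[:rk]) @ Vt[:rk]
        r = rk
    cores.append(C.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out[0, ..., 0]  # drop the dummy boundary ranks

# Exactness check on a small 4-way tensor (ranks left uncapped).
T = np.random.default_rng(0).standard_normal((4, 4, 4, 4))
err = np.linalg.norm(tt_reconstruct(tt_decompose(T, 64)) - T)
print(f"reconstruction error: {err:.1e}")  # numerically zero: no truncation

# Parameter count for a 1024 x 1024 weight matrix tensorized as a
# 10-way tensor of mode size 4, with an assumed uniform TT rank of 8.
d, n, r = 10, 4, 8
tt_params = 1 * n * r + (d - 2) * r * n * r + r * n * 1
print(f"dense parameters : {1024 * 1024:,}")
print(f"TT parameters    : {tt_params:,}")  # 2,112: a >490x reduction
```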

Principle of tensor-train decomposition
Tensor-train rank determination
Tensor-train layers
Single-wavelength implementation
Multi-wavelength implementation
Comparison between tensorized and conventional ONNs
SIMULATIONS FOR TENSORIZED OPTICAL NEURAL NETWORKS
TONN-MW ON HETEROGENEOUS III–V-ON-SILICON MOSCAP PLATFORM
Heterogeneous III–V-on-silicon MOSCAP platform
Footprint-energy efficiency
Findings
CONCLUSIONS