Abstract

We demonstrate the use of a wavelength converter, based on cross-gain modulation in a semiconductor optical amplifier (SOA), as a nonlinear function co-integrated within an all-optical neuron realized with SOA and wavelength-division multiplexing technology. We investigate the impact of fully monolithically integrated linear and nonlinear functions on the all-optical neuron output with respect to the number of synapses/neuron and data rate. Results suggest that the number of inputs can scale up to 64 while guaranteeing a large input power dynamic range of 36 dB with negligible error introduction. We also investigate the performance of its nonlinear transfer function by tuning the total input power and data rate: The monolithically integrated neuron performs about 10% better in accuracy than the corresponding hybrid device for the same data rate. These all-optical neurons are then used to simulate a 64:64:10 two-layer photonic deep neural network for handwritten digit classification, which shows an 89.5% best-case accuracy at 10 GS/s. Moreover, we analyze the energy consumption for synaptic operation, considering the full end-to-end system, which includes the transceivers, the optical neural network, and the electrical control part. This investigation shows that when the number of synapses/neuron is >18, the energy per operation is <20 pJ (6 times higher than when considering only the optical engine). The computation speed of this two-layer all-optical neural network system is 47 TMAC/s, 2.5 times faster than state-of-the-art graphics processing units, while the energy efficiency is 12 pJ/MAC, 2 times better. This result underlines the importance of scaling photonic integrated neural networks on chip.
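The 47 TMAC/s figure follows directly from the network dimensions and sampling rate quoted above: a fully connected 64:64:10 network performs 64×64 + 64×10 = 4,736 multiply-accumulate operations per sample, and at 10 GS/s that yields ≈47.4 TMAC/s. A minimal sketch of this back-of-envelope check (the function name `mac_throughput` is illustrative, not from the paper):

```python
def mac_throughput(layer_sizes, sample_rate_hz):
    """MAC operations per second for a fully connected feed-forward network."""
    # Each pair of adjacent layers contributes (inputs x outputs) MACs per sample.
    macs_per_sample = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    return macs_per_sample * sample_rate_hz

# 64:64:10 two-layer network sampled at 10 GS/s, as reported in the abstract
ops = mac_throughput([64, 64, 10], 10e9)
print(f"{ops / 1e12:.2f} TMAC/s")  # -> 47.36 TMAC/s, consistent with the ~47 TMAC/s quoted
```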

Highlights

  • Massive volume of data demands wider capacity and higher speed of information processing

  • We analyze the performance of an all-optical neural network structure with wavelength-division multiplexing (WDM) connectivity and semiconductor optical amplifier (SOA)-based all-optical neurons

  • The linear neural network can be scaled as a function of WDM signals for multi-synapse neurons: the linear processing unit can scale up to 64 channels while guaranteeing a large input dynamic range with negligible error introduction


Summary

INTRODUCTION

Massive volumes of data demand wider capacity and higher speed of information processing. The new computing paradigm of non-von-Neumann architectures has begun to unfold, leading to the development of large neuromorphic machines that exceed the energy- and size-efficiency walls of classical platforms because of their inherently parallel computational schemes. These deployments are mainly based on the spiking architectural model, which has very recently shown the potential to outperform multi-layer perceptron (MLP) models. A number of photonic accelerators have been proposed, based on discrete optical components and micro-optics as well as on photonic integrated devices.13–15 This emerging technology is capable of delivering high processing bandwidths with high power efficiency. In Section V, we analyze the energy consumption of the complete end-to-end system.

ALL-OPTICAL SOA-BASED DNN
SOA-BASED INTEGRATED ALL-OPTICAL NEURON
Integrated SOA-based non-linear function
All-optical monolithically integrated neuron
MNIST DATASET CLASSIFICATION WITH AN SOA-BASED ALL-OPTICAL NEURAL NETWORK
SYSTEM ENERGY CONSUMPTION ANALYSIS
Findings
CONCLUSION
