Abstract

In a recent work, we reported on an Extreme Learning Machine (ELM) implemented in a photonic system based on frequency multiplexing, where each wavelength of the light encodes a different neuron state. In the present work, we experimentally demonstrate the parallelization potential of this approach. We show that multiple frequency combs centered on different frequencies can copropagate in the same system, resulting in either multiple independent ELMs executed in parallel on the same substrate or a single ELM with an increased number of neurons. We experimentally tested the performance of both operation modes on several classification tasks, employing up to three different light sources, each generating an independent frequency comb. We also numerically evaluated the performance of the system in configurations containing up to 15 different light sources.

Highlights

  • Neural networks are usually trained by tuning the weights of each connection, which requires time- and power-expensive algorithms such as gradient descent

  • An Extreme Learning Machine (ELM) is a particular kind of randomized feed-forward neural network composed of a single hidden layer, where only the output weights are trained; this allows the training to be formulated as a linear problem [2–5] (see the first sketch after this list)

  • We describe two possible operating modes for such a system: either each comb is employed to perform a different computation, which corresponds to running multiple ELMs on the same substrate, or the combs are interpreted as different parts of the same neuron layer, which corresponds to executing a single ELM with an increased number of neurons with respect to our previous implementation (see the second sketch after this list)

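Since only the output weights are trained, the whole training step reduces to one linear least-squares problem. The following is a minimal sketch of that idea, assuming the hidden-layer states are already available as a matrix; all names, sizes, and the tanh nonlinearity are illustrative assumptions, not details of the photonic setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_inputs, n_neurons = 200, 4, 30

# Fixed, random, untrained input-to-hidden mapping, as in any ELM.
W_in = rng.normal(size=(n_inputs, n_neurons))
X = rng.normal(size=(n_samples, n_inputs))            # input data
H = np.tanh(X @ W_in)                                 # hidden-layer states
y = rng.integers(0, 2, size=n_samples).astype(float)  # binary targets

# Training touches only the readout: ridge regression,
# W_out = (H^T H + lam * I)^{-1} H^T y.
lam = 1e-3
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_neurons), H.T @ y)

accuracy = np.mean((H @ W_out > 0.5) == (y > 0.5))
print(f"train accuracy: {accuracy:.2f}")
```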
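
The two operating modes then differ only in how the per-comb hidden states are used at the readout. A sketch under the same assumptions, with random matrices standing in for the measured states of three combs:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_readout(H, y, lam=1e-3):
    """Ridge-regression readout: the only trained part of an ELM."""
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)

n_samples, n_per_comb, n_combs = 200, 20, 3
H_combs = [rng.normal(size=(n_samples, n_per_comb)) for _ in range(n_combs)]

# Mode 1: independent parallel ELMs -- a separate task (target vector)
# and a separate trained readout per comb.
tasks = [rng.normal(size=n_samples) for _ in range(n_combs)]
readouts = [train_readout(H, y) for H, y in zip(H_combs, tasks)]

# Mode 2: one larger ELM -- the combs form a single hidden layer with
# n_combs * n_per_comb neurons and a single trained readout.
H_big = np.hstack(H_combs)
y = rng.normal(size=n_samples)
W_big = train_readout(H_big, y)
print(W_big.shape)  # (60,)
```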

Introduction

Neural networks are usually trained by tuning the weights of each connection, which requires time- and power-expensive algorithms such as gradient descent. The power of our approach lies in the fact that the system does not need to be modified in any way, since it represents the internal, untrained connections of the network, while the output weights, which do have to be trained, can be set by the readout mechanism. The input layer is encoded in the combs E_in by the attenuations F_in, and the information contained therein is mixed by PM2, generating the new set of combs E_hidden that represent the hidden layer. This mixing is linear and consists in an interference of comb lines (Equation (5)). In Section 4.2, we show that the limited dimensionality enhancement does not hinder performance when compared to an all-to-all mixing scheme.
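
To make the mixing step concrete, here is a minimal numerical sketch of it. The banded coupling (each line interfering only with a few neighbours, rather than all-to-all), the square-law detection at the output, and all sizes and weights are assumptions for illustration; the actual mixing is the one given by Equation (5) of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_lines, bandwidth = 20, 2            # comb lines; coupling range of PM2

x = rng.uniform(0, 1, size=n_lines)   # input values
F_in = np.sqrt(x)                     # attenuations encoding the input
E_in = F_in.astype(complex)           # amplitudes of the comb lines

# Linear mixing: each output line is an interference of nearby input
# lines, so the matrix is banded instead of all-to-all.
M = np.zeros((n_lines, n_lines), dtype=complex)
for i in range(n_lines):
    for j in range(max(0, i - bandwidth), min(n_lines, i + bandwidth + 1)):
        M[i, j] = rng.normal() + 1j * rng.normal()

E_hidden = M @ E_in                   # hidden-layer comb
neurons = np.abs(E_hidden) ** 2       # detected intensities (assumed readout)
print(neurons[:5])
```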
