Abstract

Optical neural networks (ONNs) are emerging as attractive candidates for machine-learning applications. However, the stability of ONNs decreases with circuit depth, limiting their scalability for practical use. Here we demonstrate how to compress the circuit depth so that it scales only logarithmically with the dimension of the data, leading to an exponential gain in noise robustness. Our low-depth ONN (LD-ONN) is based on an architecture, called Optical CompuTing Of dot-Product UnitS (OCTOPUS), which can also be applied on its own as a linear perceptron for solving classification problems. We present both numerical and theoretical evidence showing that LD-ONNs exhibit a significant improvement in robustness compared with previous ONN proposals based on singular-value decomposition.
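To give a rough intuition for why logarithmic depth translates into an exponential robustness gain, consider a toy noise model (our assumption for illustration, not the paper's analysis) in which each circuit layer degrades fidelity by a fixed factor. A depth that grows linearly with the data dimension n, as in singular-value-decomposition-based meshes, then suffers exponentially more loss than a depth growing as log n. The sketch below compares the two scalings; the per-layer error `eps` and the exact depth formulas are illustrative placeholders.

```python
import math

def fidelity(depth: int, eps: float = 0.01) -> float:
    """Toy model: a per-layer error eps compounds multiplicatively,
    so end-to-end fidelity decays as (1 - eps) ** depth."""
    return (1.0 - eps) ** depth

for n in (64, 256, 1024, 4096):
    svd_depth = n                       # assumed linear depth for an SVD-based mesh
    ld_depth = math.ceil(math.log2(n))  # assumed logarithmic depth for an LD-ONN
    print(f"n={n:5d}  SVD: depth={svd_depth:5d}, fidelity={fidelity(svd_depth):.3f}  "
          f"LD: depth={ld_depth:2d}, fidelity={fidelity(ld_depth):.3f}")
```

Under this simple model, at n = 4096 the linear-depth circuit retains essentially no signal while the logarithmic-depth circuit keeps its fidelity near one, consistent with the exponential gap claimed in the abstract.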
