Abstract

Photonic neural networks (PNNs) have emerged as promising alternatives to traditional electronic neural networks. However, training PNNs, especially the on‐chip implementation of analytic gradient‐descent algorithms that are recognized as highly efficient in conventional practice, remains a major challenge because physical systems are not differentiable. Although training methods such as gradient‐free and numerical‐gradient approaches have been proposed, they suffer from excessive measurements and limited scalability. The state‐of‐the‐art in situ training method is also costly, requiring expensive in‐line monitors and frequent optical I/O switching. Here, a physics‐aware analytic‐gradient training (PAGT) method is proposed that calculates the analytic gradient using a divide‐and‐conquer strategy, overcoming the difficulty induced by chip non‐differentiability in the training of PNNs. Multiple training cases, most notably a generative adversarial network, are implemented on‐chip, achieving a significant reduction in time consumption (from 31 h to 62 min) and a fourfold reduction in energy consumption compared to the in situ method. The results provide a low‐cost, practical, and accelerated solution for training hybrid photonic‐digital electronic neural networks.
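To make the core idea concrete, the following is a minimal conceptual sketch of physics‐aware gradient training, assuming (as in related digital‐twin approaches, not stated by this abstract) that the non‐differentiable photonic layer is paired with a differentiable digital model: the forward pass uses the physical measurement, while the analytic gradient is computed through the model. All function names and the linear‐layer assumption are hypothetical illustrations, not the paper's actual PAGT implementation.

```python
# Minimal sketch of physics-aware analytic-gradient training (assumptions:
# the photonic layer behaves approximately linearly, and a differentiable
# digital model stands in for the chip during backpropagation).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))  # trainable weights (e.g., phase-shifter settings)

def digital_model(x, W):
    """Differentiable stand-in for the photonic layer (assumed linear here)."""
    return W @ x

def physical_chip(x, W):
    """Placeholder for the real, non-differentiable chip measurement:
    here simulated as the digital model plus unmodeled noise."""
    return digital_model(x, W) + rng.normal(scale=0.01, size=4)

x = rng.normal(size=4)   # input encoded onto the chip
target = np.ones(4)      # desired output

for step in range(100):
    y_phys = physical_chip(x, W)   # forward pass measured on hardware
    err = y_phys - target          # dL/dy for the loss L = 0.5 * ||y - target||^2
    grad_W = np.outer(err, x)      # analytic gradient of L w.r.t. W via the digital model
    W -= 0.1 * grad_W              # gradient-descent update applied to the chip weights
```

The design point the sketch illustrates is that the error signal comes from the physical measurement, while the gradient structure comes from the differentiable model, so no in‐line optical monitoring of intermediate states is required.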
