Abstract

In the post-Moore's-law era, conventional electronic digital computers face escalating challenges in supporting massively parallel, energy-hungry artificial intelligence (AI) workloads, creating a pressing demand for revolutionary AI computing solutions. The optical neural network (ONN) is a promising hardware platform that could represent a paradigm shift toward efficient AI with its ultra-fast speed, high parallelism, and low energy consumption. In recent years, efforts have been made to strengthen the ONN design stack and push forward the practical application of optical neural accelerators. In this paper, we present a holistic solution with state-of-the-art cross-layer co-design methodologies for scalable, robust, and self-learnable integrated photonic neural accelerator designs spanning the circuit, architecture, and algorithm levels. We introduce (1) an area-efficient butterfly-style ONN architecture that goes beyond traditional general tensor units, (2) model-circuit co-optimization that boosts the variation tolerance and endurance of photonic in-memory computing, (3) efficient on-chip ONN training algorithms that enable self-learnable photonic AI engines, and (4) an AI-assisted automated photonic integrated circuit (PIC) design methodology that surpasses manual PIC designs in footprint, expressivity, and noise tolerance. Our proposed ONN design stack is integrated into TorchONN, our open-source PyTorch-centric ONN library, to construct customized photonic AI engine designs and perform high-performance ONN training and optimization.
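The area savings of the butterfly-style architecture in (1) come from replacing a dense n-by-n weight matrix with log2(n) stages of 2-by-2 mixing units, cutting the parameter count from O(n^2) to O(n log n). The sketch below is an illustrative NumPy model of such a butterfly transform built from 2-by-2 rotations; the function name, angle parameterization, and stage ordering are assumptions chosen for clarity, not the paper's exact photonic implementation.

```python
import numpy as np

def butterfly_transform(x, thetas):
    """Apply a butterfly-structured linear transform to vector x.

    x:      length-n input, n a power of two.
    thetas: rotation angles of shape (log2(n), n // 2) -- one 2x2
            rotation per butterfly pair per stage, i.e. O(n log n)
            parameters instead of the O(n^2) of a dense matrix.
    """
    n = x.shape[0]
    stages = int(np.log2(n))
    y = x.astype(float).copy()
    for s in range(stages):
        stride = 1 << s          # pairing distance doubles each stage
        out = np.empty_like(y)
        pair = 0
        for block in range(0, n, 2 * stride):
            for i in range(block, block + stride):
                a, b = y[i], y[i + stride]
                c, t = np.cos(thetas[s, pair]), np.sin(thetas[s, pair])
                out[i] = c * a - t * b          # 2x2 rotation mixes
                out[i + stride] = t * a + c * b  # each element pair
                pair += 1
        y = out
    return y
```

Because every stage is composed of rotations, the transform is norm-preserving by construction; with all angles zero it reduces to the identity, which makes the structure easy to sanity-check.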
