Abstract

Key to recent successes in artificial intelligence (AI) has been the ability to train a growing number of parameters that form fixed connectivity matrices between layers of nonlinear nodes. This "deep learning" approach to AI has historically required exponential growth in processing power, far exceeding both the growth in computational throughput of digital hardware and trends in processing efficiency. New computing paradigms are therefore required to process information efficiently while drastically improving computational throughput. Emerging strategies for analog computing in the photonic domain have the potential to sharply reduce latency but require the ability to modify optical processing elements according to the learned parameters of the neural network. In this point-of-view article, we provide a forward-looking perspective on both optical and electrical memories coupled to integrated photonic hardware in the context of AI. We also show that, for programmed memories, the READ energy-latency product of photonic random-access memory (PRAM) can be orders of magnitude lower than that of electronic SRAM. Our intent is to outline a path for PRAMs to become an integral part of future foundry processes and to give these promising devices relevance for emerging AI hardware.
