Abstract
Key to recent successes in the field of artificial intelligence (AI) has been the ability to train a growing number of parameters that form fixed connectivity matrices between layers of nonlinear nodes. This “deep learning” approach to AI has historically required an exponential growth in processing power that far exceeds both the growth in computational throughput of digital hardware and trends in processing efficiency. New computing paradigms are therefore required to enable efficient processing of information while drastically improving computational throughput. Emerging strategies for analog computing in the photonic domain have the potential to dramatically reduce latency but require the ability to modify optical processing elements according to the learned parameters of the neural network. In this point-of-view article, we provide a forward-looking perspective on both optical and electrical memories coupled to integrated photonic hardware in the context of AI. We also show that, for programmed memories, the READ energy-latency product of photonic random-access memory (PRAM) can be orders of magnitude lower than that of electronic SRAMs. Our intent is to outline a path for PRAMs to become an integral part of future foundry processes and to give these promising devices relevance for emerging AI hardware.
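As a purely illustrative sketch of the figure of merit used above: the READ energy-latency product is simply the energy per READ operation multiplied by the READ latency, so lowering either quantity lowers the product. The numerical values below are hypothetical placeholders chosen only to show how a multi-order-of-magnitude gap can arise; they are not measurements or results from the article.

\[
  \mathrm{ELP} = E_{\mathrm{READ}} \times t_{\mathrm{READ}}
\]
\[
  \mathrm{ELP}_{\mathrm{SRAM}} \sim (1\,\mathrm{pJ})\,(1\,\mathrm{ns}) = 10^{-21}\,\mathrm{J\,s},
  \qquad
  \mathrm{ELP}_{\mathrm{PRAM}} \sim (10\,\mathrm{fJ})\,(100\,\mathrm{ps}) = 10^{-24}\,\mathrm{J\,s}
\]

Under these assumed numbers the photonic memory's READ energy-latency product would be roughly three orders of magnitude lower, which is the sense in which the comparison in the abstract should be read.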