Abstract

Over the past two decades, research on artificial neural networks (ANNs) and deep learning has flourished, enabling applications of artificial intelligence (AI) in image recognition, natural language processing, medical image analysis, molecular and materials science, autonomous driving, and beyond. As AI application scenarios grow more complex, massive volumes of perceptual data must be processed in real time, placing ever higher demands on the computation speed and energy consumption of the traditional electronic integrated chips that execute ANN and deep learning algorithms. However, owing to the unsustainability of Moore’s Law and the breakdown of Dennard scaling, the growth in computing power of conventional electronic chips built on transistors and the von Neumann architecture can hardly keep pace with the rapid growth of data volume. Enabled by silicon-based optoelectronics, analog optical computing supports sub-nanosecond latency and energy efficiency on the order of femtojoules per operation, providing an alternative route to greatly expand computing resources and accelerate deep learning tasks. Chapter 1 briefly explains the challenges facing electronic computing technologies and introduces potential solutions, including analog optical computing. Chapter 2 then surveys recent important research progress in analog optical computing across four photonic platforms: the coherent integration platform, the incoherent integration platform, the space-propagation optical platform, and the optical fiber platform. Chapter 3 summarizes and discusses nonlinearity and training algorithms for analog optical computing, and Chapter 4 points out the prospects and challenges of analog optical computing.
