Abstract

Neural networks, which have achieved breakthroughs in many applications, rely heavily on convolution and matrix-vector multiplication operations. Silicon photonic neural networks, which benefit from power efficiency, low latency, large bandwidth, massive parallelism, and CMOS compatibility, have been proposed as a promising solution for accelerating these operations. In this study, we propose a scalable architecture based on a silicon photonic integrated circuit and optical frequency combs that offers high computing speed and power efficiency. A proof-of-concept silicon photonics neuromorphic accelerator based on integrated coherent transmit–receive optical sub-assemblies, operating at over 1 TOPS with only one computing cell, is experimentally demonstrated. We apply it to process fully connected and convolutional neural networks, achieving an inference accuracy of up to 96.67% in handwritten digit recognition, competitive with its electronic counterpart. By leveraging optical frequency combs, the approach's computing speed can potentially scale with the square of the number of cells to realize over 1 Peta-Op/s. This scalability opens possibilities for applications such as autonomous vehicles, real-time video processing, and other high-performance computing tasks.
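
To illustrate the quadratic scaling claim, the sketch below (a hypothetical Python illustration, not taken from the paper) assumes roughly 1 TOPS per computing cell and aggregate throughput growing with the square of the cell count N; under those assumptions, on the order of 32 cells would be needed to cross 1 Peta-Op/s.

```python
# Back-of-envelope sketch of the claimed N^2 throughput scaling.
# Assumptions (not from the paper's data): ~1 TOPS per computing cell,
# and total throughput proportional to the square of the cell count,
# as suggested by the abstract's scaling argument.

def total_throughput_tops(num_cells: int, tops_per_cell: float = 1.0) -> float:
    """Estimated aggregate throughput in TOPS if speed scales as num_cells**2."""
    return tops_per_cell * num_cells ** 2

if __name__ == "__main__":
    # 1 Peta-Op/s = 1000 TOPS, reached around num_cells ~= 32 (32**2 = 1024).
    for n in (1, 8, 16, 32):
        print(f"{n:>3} cells -> ~{total_throughput_tops(n):,.0f} TOPS")
```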
