Abstract

Recent advances in Deep Learning (DL) have fueled interest in developing neuromorphic hardware accelerators that can improve upon the computational speed and energy efficiency of existing accelerators. Among the most promising research directions are photonic neuromorphic architectures, which can achieve femtojoule-per-MAC efficiencies. Despite the benefits that arise from the use of neuromorphic architectures, a significant bottleneck is the expensive high-speed, high-precision analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) required to transfer the electrical signals originating from the various Artificial Neural Network (ANN) operations (inputs, weights, etc.) into the photonic optical engines. The main contribution of this paper is to study the quantization phenomena induced by DACs/ADCs in photonic models as an additional noise/uncertainty source, and to provide a photonics-compliant framework for training photonic DL models with limited precision, reducing the need for expensive high-precision DACs/ADCs. The effectiveness of the proposed method is demonstrated on different architectures, ranging from fully connected and convolutional networks to recurrent architectures, following recent advances in photonic DL.
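To make the idea of treating DAC/ADC quantization as a noise source during training more concrete, the following is a minimal illustrative sketch, not the authors' implementation: it applies a uniform fake-quantization step with a straight-through gradient at the input (DAC) and output (ADC) of a layer standing in for a photonic optical engine. The class names, bit widths, and sigmoid transfer function are assumptions chosen for illustration.

```python
# Minimal sketch (assumed, not the paper's code): simulating limited-precision
# DAC/ADC conversion as a differentiable quantization step during training.
import torch
import torch.nn as nn


class FakeQuant(torch.autograd.Function):
    """Uniform quantizer with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, x, bits):
        levels = 2 ** bits - 1
        x = x.clamp(0.0, 1.0)                      # assume signals normalized to [0, 1]
        return torch.round(x * levels) / levels    # quantize to 2^bits levels

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None                   # pass gradients straight through


class QuantizedPhotonicLayer(nn.Module):
    """Linear layer whose inputs (DAC side) and outputs (ADC side) are quantized."""

    def __init__(self, in_features, out_features, dac_bits=4, adc_bits=6):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.dac_bits, self.adc_bits = dac_bits, adc_bits

    def forward(self, x):
        x = FakeQuant.apply(x, self.dac_bits)      # limited-precision DAC at the input
        y = torch.sigmoid(self.fc(x))              # stand-in for a photonic transfer function
        return FakeQuant.apply(y, self.adc_bits)   # limited-precision ADC at the output
```

Training with such quantization in the forward pass exposes the model to the conversion-induced error, which is the general quantization-aware training principle the abstract refers to.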
