Abstract
Coherent integrated photonic neural networks (IPNNs) are increasingly being explored for rapidly growing artificial intelligence applications. However, the principal roadblocks to the scalability of IPNNs are their large area footprint and high tuning power consumption during both training and inference. In deep neural networks (DNNs), software techniques that prune redundant weights are often used to reduce resource (e.g., memory, computation, and power) overheads. However, due to the complex manner in which software weights are mapped onto the building blocks of IPNNs, prior efforts to apply existing pruning approaches to IPNNs have been ineffective. We present CHAMP and LTPrune, two novel hardware-aware pruning techniques for IPNNs. Using a case study of three IPNNs with different footprints, we show that both methods can prune more than 99% of the phase angles (which are analogous to the weight parameters in DNNs). We also analyze the performance of the pruned IPNNs under phase uncertainties and present a comparative analysis of the two methods to enable advanced hardware-software-assisted design-optimization techniques for IPNNs. To expedite pruning, we also propose HybridPrune, where CHAMP and LTPrune are used in conjunction to obtain network sparsity similar to that of standalone LTPrune but with up to 78.3% fewer retraining epochs.
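As a rough intuition for what "pruning phase angles" means, the sketch below applies simple magnitude-based pruning to the phase parameters of a simulated Mach-Zehnder interferometer (MZI), the basic building block of coherent IPNNs. This is an illustrative assumption only: neither CHAMP nor LTPrune is reproduced here, the 2x2 MZI parameterization is one common convention rather than the paper's exact model, and the `prune_phases` helper and threshold value are hypothetical.

```python
# Illustrative sketch only; not the paper's CHAMP/LTPrune algorithms.
# Shows magnitude-based pruning of MZI phase angles (the photonic analogue
# of DNN weights), under an assumed, simplified 2x2 MZI parameterization.
import numpy as np

def mzi(theta, phi):
    """2x2 transfer matrix of a single MZI (one common convention)."""
    return 1j * np.exp(1j * theta / 2) * np.array(
        [[np.exp(1j * phi) * np.sin(theta / 2),  np.cos(theta / 2)],
         [np.exp(1j * phi) * np.cos(theta / 2), -np.sin(theta / 2)]]
    )

def prune_phases(phases, threshold):
    """Snap small-magnitude phase angles to zero (a fixed, untuned state)."""
    pruned = np.where(np.abs(phases) < threshold, 0.0, phases)
    sparsity = float(np.mean(pruned == 0.0))
    return pruned, sparsity

rng = np.random.default_rng(0)
thetas = rng.normal(0.0, 0.3, size=64)          # hypothetical trained phases
pruned, sparsity = prune_phases(thetas, 0.05)   # hypothetical threshold
print(f"pruned {sparsity:.1%} of phase angles")
```

A phase angle pruned to zero corresponds to an MZI that no longer needs active tuning, which is why high phase sparsity translates into lower tuning power in hardware.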