Abstract

Free-space diffractive optical networks are a class of trainable optical media currently being explored as a novel hardware platform for neural engines. The training phase of such systems is usually performed on a computer, and the learned weights are then transferred onto optical hardware (‘ex-situ training’). Although this weight-transfer process has many practical advantages, it is often accompanied by performance-degrading faults in the fabricated hardware. Being analog systems, these engines are also subject to performance degradation from noise in the inputs and during optoelectronic conversion. Considering diffractive optical networks trained for image classification tasks on standard datasets, we numerically study the performance degradation arising from weight faults and injected noise, as well as methods to ameliorate these effects. Training regimens based on intentional fault and noise injection during the training phase are found to be only marginally successful at imparting fault tolerance or noise immunity. We propose an alternative training regimen using gradient-based regularization terms in the training objective, which is found to impart some degree of fault tolerance and noise immunity compared to injection-based training regimens.
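The two training regimens contrasted in the abstract can be illustrated on a toy model. The sketch below is purely illustrative and not the paper's actual method: it uses a small linear classifier with logistic loss as a stand-in for a diffractive optical network, injects Gaussian perturbations into the weights during training (the injection-based regimen), and, as a simple stand-in for a gradient-based regularizer, penalizes the squared gradient norm of the loss using a finite-difference estimate. All function names, hyperparameters, and the data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, linearly separable two-class data (stand-in for an image dataset).
X = rng.normal(size=(200, 8))
y = (X @ rng.normal(size=8) > 0).astype(float)

def loss_and_grad(w, X, y):
    """Logistic loss and its gradient for a linear model."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

def train(n_steps=500, lr=0.5, weight_noise=0.0, grad_reg=0.0, eps=1e-3):
    """weight_noise > 0: injection-based regimen (perturb weights each step).
    grad_reg > 0: penalize ||dL/dw||^2, whose own gradient is estimated here
    by central finite differences (illustrative, not the paper's scheme)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_steps):
        # Evaluate the gradient at a randomly perturbed weight vector.
        _, g = loss_and_grad(w + weight_noise * rng.normal(size=w.shape), X, y)
        step = g
        if grad_reg > 0:
            reg_grad = np.zeros_like(w)
            for i in range(len(w)):
                wp, wm = w.copy(), w.copy()
                wp[i] += eps
                wm[i] -= eps
                _, gp = loss_and_grad(wp, X, y)
                _, gm = loss_and_grad(wm, X, y)
                reg_grad[i] = (gp @ gp - gm @ gm) / (2 * eps)
            step = step + grad_reg * reg_grad
        w -= lr * step
    return w

def faulted_accuracy(w, sigma, n_trials=50):
    """Mean accuracy under random additive weight faults of scale sigma."""
    accs = [np.mean((X @ (w + sigma * rng.normal(size=w.shape)) > 0) == (y > 0.5))
            for _ in range(n_trials)]
    return float(np.mean(accs))
```

Comparing `faulted_accuracy` across weights trained with `weight_noise=0`, `weight_noise>0`, and `grad_reg>0` mirrors the kind of numerical comparison the abstract describes, with a real diffractive model in place of this toy classifier.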
