Abstract

To alleviate the long training time and heavy computational resource consumption of diffractive deep neural networks (D2NNs), we present the 2bit nonlinear diffractive deep neural network (2bit ND2NN) model, which draws on the concept of quantized neural networks. The 2bit ND2NN converts the phase of each diffractive-layer pixel from continuous to discrete values, i.e., 0, π/2, π, 3π/2, and 2π (where 2π is equivalent to 0). The phase and relative-thickness formulas are then used to determine the thickness of each pixel. Furthermore, both the phase and the amplitude of the neurons in the 2bit ND2NN model are treated as learnable parameters. A revised formula is used to determine the neuron size, and the spacing between diffractive layers, the number of layers, the pixel size, and the number of pixels are determined through ablation experiments. Experimental results for image classification show that the highest accuracy achieved by 2bit ND2NN on the MNIST and Fashion-MNIST datasets is 97.88% and 89.28%, respectively.
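
The following is a minimal illustrative sketch (not the authors' implementation) of the two steps the abstract describes: snapping a continuous phase map to 2-bit levels and converting the quantized phase to a relative pixel thickness. The wavelength, refractive index, and the phase–thickness relation φ = 2π(n − 1)t/λ are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative constants; the paper's actual wavelength and material are not given here.
WAVELENGTH = 632.8e-9      # assumed wavelength (m)
REFRACTIVE_INDEX = 1.52    # assumed refractive index of the layer material


def quantize_phase_2bit(phase):
    """Snap a continuous phase map to the nearest 2-bit level
    {0, pi/2, pi, 3*pi/2}; 2*pi wraps back to 0."""
    step = np.pi / 2                          # spacing between quantized levels
    wrapped = np.mod(phase, 2 * np.pi)        # fold phase into [0, 2*pi)
    return np.mod(np.round(wrapped / step) * step, 2 * np.pi)


def phase_to_thickness(phase):
    """Convert a (quantized) phase delay to relative pixel thickness,
    assuming phi = 2*pi*(n - 1)*t / lambda."""
    return phase * WAVELENGTH / (2 * np.pi * (REFRACTIVE_INDEX - 1))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    continuous_phase = rng.uniform(0, 2 * np.pi, size=(4, 4))  # toy 4x4 layer
    quantized = quantize_phase_2bit(continuous_phase)
    thickness = phase_to_thickness(quantized)
    print("quantized phase levels:", np.unique(quantized))
    print("corresponding relative thicknesses (m):", np.unique(thickness))
```

In a trainable setting, such a rounding step would typically be paired with a straight-through estimator so the quantized phase can still be optimized by gradient descent, but that detail is not specified in the abstract.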
