Abstract

To alleviate the long training times and heavy computational resource consumption of diffractive deep neural networks (D2NNs), we present the 2bit nonlinear diffractive deep neural network (2bit ND2NN) model, which draws on the concept of quantized neural networks. 2bit ND2NN converts the phase of each diffraction-layer pixel from continuous values to one of five discrete levels: 0, π/2, π, 3π/2, and 2π. The formulas relating phase to relative thickness are then used to determine the thickness of each pixel. Furthermore, both the phase and the amplitude of the neurons in the 2bit ND2NN model are treated as learnable parameters. A revised formula determines the neuron size, and ablation experiments determine the diffraction grating spacing, the number of grating layers, the pixel size, and the pixel count. Experimental results for image classification show that 2bit ND2NN achieves a highest accuracy of 97.88% on the MNIST dataset and 89.28% on the Fashion-MNIST dataset.
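The sketch below is an illustrative (not the authors') rendering of the two steps named in the abstract: rounding each pixel's continuous phase to a 2-bit level, and converting that phase into a relative pixel thickness. It assumes the standard thin-element relation φ = 2π(n − 1)h / λ; the wavelength and refractive index used in the example are placeholders, not values from the paper.

```python
import numpy as np

# 2-bit phase levels; 2*pi wraps back to 0, so four distinct values remain.
LEVELS = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

def quantize_phase_2bit(phase: np.ndarray) -> np.ndarray:
    """Round continuous phases (radians) to the nearest 2-bit level."""
    phase = np.mod(phase, 2 * np.pi)                       # wrap into [0, 2*pi)
    idx = np.rint(phase / (np.pi / 2)).astype(int) % 4     # nearest quarter turn
    return LEVELS[idx]

def phase_to_thickness(phase: np.ndarray, wavelength: float, n: float) -> np.ndarray:
    """Relative thickness giving the requested phase delay (thin-element model)."""
    return phase * wavelength / (2 * np.pi * (n - 1))

# Example: quantize a random 4x4 phase mask and compute pixel thicknesses.
rng = np.random.default_rng(0)
phase_layer = rng.uniform(0, 2 * np.pi, size=(4, 4))
quantized = quantize_phase_2bit(phase_layer)
thickness = phase_to_thickness(quantized, wavelength=532e-9, n=1.5)  # placeholder optics
```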
