Abstract

Deep learning convolutional neural networks generally involve multiple-layer, forward-backward propagation machine-learning algorithms that are computationally costly. In this work, we demonstrate an alternative scheme to convolutional neural nets that reconstructs an original image from its optically preprocessed, Fourier-encoded pattern. The scheme is much less computationally demanding and more noise robust, and thus suited for high-speed and low-light imaging. We introduce a vortex phase transform with a lenslet array to accompany shallow, dense, "small-brain" neural networks. Our single-shot coded-aperture approach exploits the coherent diffraction, compact representation, and edge enhancement of Fourier-transformed spiral phase gradients. With vortex encoding, a small brain is trained to deconvolve images at rates 5–20 times faster than those achieved with random encoding schemes, with greater advantages gained in the presence of noise. Once trained, the small brain reconstructs an object from intensity-only data, solving an inverse mapping without performing iterations on each image and without deep learning schemes. With vortex Fourier encoding, we reconstruct MNIST Fashion objects illuminated with low-light flux (5 nJ/cm²) at a rate of several thousand frames per second on a 15 W central processing unit. We demonstrate that Fourier optical preprocessing with vortex encoders achieves accuracies similar to those of convolutional neural networks at speeds 2 orders of magnitude faster.
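To make the pipeline concrete, the sketch below simulates a single-charge vortex (spiral-phase) Fourier encoder applied to an image and recovers the object from the intensity-only pattern with a shallow linear map. This is not the authors' code: the topological charge, the 28×28 image size, the random stand-in data, and the closed-form least-squares fit (in place of training a shallow dense network) are all illustrative assumptions, and the lenslet-array multiplexing of several vortex charges is omitted for brevity.

```python
# Minimal sketch, not the published implementation: one vortex Fourier encoder
# plus a linear "small-brain" inverse map fit in closed form.
import numpy as np

def vortex_encode(img, m=1):
    """Intensity measured after a spiral (vortex) phase mask of charge m in the Fourier plane."""
    n = img.shape[0]
    y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
    spiral = np.exp(1j * m * np.arctan2(y, x))                 # vortex phase exp(i*m*theta)
    field = np.fft.fftshift(np.fft.fft2(img)) * spiral         # Fourier transform + phase mask
    return np.abs(np.fft.ifft2(np.fft.ifftshift(field)))**2    # intensity-only (no phase) data

rng = np.random.default_rng(0)
n_train, size = 2000, 28
objects = rng.random((n_train, size, size))                    # stand-in for MNIST Fashion images
patterns = np.array([vortex_encode(o) for o in objects])       # single-shot encoded intensities

# Shallow inverse map: here a single linear layer solved by least squares
# rather than backpropagation, to emphasize the absence of deep learning.
X = patterns.reshape(n_train, -1)
Y = objects.reshape(n_train, -1)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Once "trained", reconstruction is a single matrix multiply per frame,
# with no per-image iterations.
test = rng.random((size, size))
recon = (vortex_encode(test).reshape(1, -1) @ W).reshape(size, size)
```

In this toy setup the per-frame cost after training is one matrix-vector product, which is the source of the kilohertz-scale reconstruction rates claimed for a low-power CPU.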
