Abstract
Phase recovery from intensity-only measurements forms the heart of coherent imaging techniques and holography. In this study, we demonstrate that a neural network can learn to perform phase recovery and holographic image reconstruction after appropriate training. This deep learning-based approach provides an entirely new framework to conduct holographic imaging by rapidly eliminating twin-image and self-interference-related spatial artifacts. The method reconstructs the phase and amplitude images of the objects from a single hologram intensity, requiring fewer measurements than conventional phase recovery approaches while also being faster to compute. We validated this method by reconstructing the phase and amplitude images of various samples, including blood smears, Pap smears, and tissue sections. These results highlight that challenging problems in imaging science can be overcome through machine learning, providing new avenues to design powerful computational imaging systems.
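As a rough illustration of the kind of network involved (a minimal sketch only, not the specific architecture reported in the paper), the following PyTorch snippet maps a single back-propagated hologram field, split into real and imaginary channels, to two output channels interpreted as object amplitude and phase. The layer count and width, the plain pixel-wise L2 loss, and the pairing of each input with an artifact-free multi-measurement reconstruction as ground truth are placeholder assumptions for demonstration.

import torch
import torch.nn as nn

class PhaseRecoveryCNN(nn.Module):
    # Toy network: 2 input channels (real/imaginary parts of the back-propagated
    # hologram) -> 2 output channels (object amplitude and phase). Depth and
    # channel count are arbitrary placeholders, not the published architecture.
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 2, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# One illustrative training step on random placeholder tensors; in practice the
# targets would be artifact-free amplitude/phase images obtained from a
# multi-measurement reconstruction.
model = PhaseRecoveryCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
inputs = torch.randn(4, 2, 256, 256)   # placeholder back-propagated holograms
targets = torch.randn(4, 2, 256, 256)  # placeholder ground-truth amplitude/phase
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(inputs), targets)
loss.backward()
optimizer.step()

Once trained on such input-target pairs, a single forward pass through the network replaces the iterative clean-up of twin-image and self-interference artifacts, which is what makes single-hologram reconstruction fast.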
Highlights
Optoelectronic sensor arrays, such as charge-coupled devices (CCDs) or complementary metal-oxide-semiconductor (CMOS)-based imagers, are only sensitive to the intensity of light; phase information of the objects or the diffracted light waves cannot be directly recorded using such imagers
We report a convolutional neural network-based method, trained through deep learning[41,42], that can perform phase recovery and holographic image reconstruction using a single hologram intensity
Our deep neural network approach for phase retrieval and holographic image reconstruction is schematically described in Figure 1
Summary
Optoelectronic sensor arrays, such as charge-coupled devices (CCDs) or complementary metal-oxide-semiconductor (CMOS)-based imagers, are only sensitive to the intensity of light; phase information of the objects or the diffracted light waves cannot be directly recorded using such imagers. To improve the performance of the phase recovery and image reconstruction processes, additional intensity information is recorded, for example, by scanning the illumination source aperture[15,16,17,18], the sample-to-sensor distance[19,20,21,22,23] (in some cases referred to as out-of-focus imaging[24]), the wavelength of illumination[25,26], or the phase front of the reference beam[27,28,29,30], among other methods[31,32,33,34,35,36]. All these methods utilize additional physical constraints and intensity measurements to robustly retrieve the missing phase information based on an analytical and/or iterative solution that satisfies the wave equation. Some of these phase retrieval techniques have enabled discoveries in different fields[37,38,39,40].
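As a concrete example of the "analytical and/or iterative solution that satisfies the wave equation" referred to above, the NumPy sketch below implements a simplified multi-height (sample-to-sensor distance scanning) phase retrieval loop: the current field estimate is numerically propagated between measurement planes with the angular spectrum method, and at each plane its amplitude is replaced by the measured one while the phase is retained. The function names, iteration count, and sampling parameters are illustrative assumptions, not the exact procedure of any cited work.

import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    # Free-space propagation of a complex field by a distance dz (angular spectrum method).
    n = field.shape[0]                                       # assumes a square n x n array
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    kz_sq = (1.0 / wavelength) ** 2 - fxx ** 2 - fyy ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(kz_sq, 0.0))       # evanescent components clipped to zero
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def multi_height_phase_retrieval(intensities, heights, wavelength, dx, n_iter=50):
    # intensities: list of measured hologram intensities, one per sensor height.
    # Returns the recovered complex field at the first measurement plane.
    field = np.sqrt(intensities[0]).astype(np.complex128)    # initial guess: measured amplitude, zero phase
    for _ in range(n_iter):
        for i in range(1, len(heights)):                     # forward sweep through the planes
            field = angular_spectrum_propagate(field, heights[i] - heights[i - 1], wavelength, dx)
            field = np.sqrt(intensities[i]) * np.exp(1j * np.angle(field))
        for i in range(len(heights) - 2, -1, -1):            # backward sweep
            field = angular_spectrum_propagate(field, heights[i] - heights[i + 1], wavelength, dx)
            field = np.sqrt(intensities[i]) * np.exp(1j * np.angle(field))
    return field

# Example call (placeholder values): multi_height_phase_retrieval(
#     [I0, I1, I2], heights=[0e-6, 100e-6, 200e-6], wavelength=532e-9, dx=1.12e-6)

Each additional measurement plane adds a constraint that suppresses the twin image, and it is exactly this extra acquisition and iterative computation that a learned single-hologram reconstruction sidesteps.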