Abstract

We present a machine-learning-based method for single-shot imaging that simultaneously observes both the amplitude and phase of an object under incoherent light, without any imaging optics. In the proposed method, an object with a complex-amplitude field is illuminated with incoherent light and captured by an image sensor, with or without a coded aperture. The complex-amplitude field of the object is reconstructed from a single captured image by a state-of-the-art deep convolutional neural network trained on a large number of input and output pairs. In experimental demonstrations, the proposed method was verified with a handwritten character database, and the effect on the reconstruction of a coded aperture printed on an overhead projector (OHP) film was examined. Our method has advantages over conventional wavefront sensing techniques using incoherent light, namely simpler optical hardware and faster measurement. This study shows the importance and practical impact of machine learning techniques in various fields of optical sensing.
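The abstract states only that a deep convolutional neural network, trained on input/output pairs, maps a single captured image to the object's complex-amplitude field. The following minimal sketch illustrates that idea with a generic encoder-style CNN in PyTorch; the architecture, layer sizes, loss, and all names are assumptions for illustration, not the network used in the paper.

```python
# Minimal sketch (not the paper's actual network): a small CNN that maps a
# single-channel captured intensity image to a two-channel output interpreted
# as amplitude and phase, trained on paired examples.
import torch
import torch.nn as nn


class ComplexAmplitudeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2, 3, padding=1),  # channel 0: amplitude, channel 1: phase
        )

    def forward(self, captured):
        return self.net(captured)


def train_step(model, optimizer, captured, target):
    """One supervised step on a (captured image, amplitude/phase target) pair."""
    optimizer.zero_grad()
    pred = model(captured)                      # (N, 2, H, W)
    loss = nn.functional.mse_loss(pred, target)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = ComplexAmplitudeNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Dummy batch standing in for sensor captures and ground-truth amplitude/phase maps.
    captured = torch.rand(4, 1, 64, 64)
    target = torch.rand(4, 2, 64, 64)
    print(train_step(model, optimizer, captured, target))
```

Regressing amplitude and phase as two real-valued channels is one simple way to pose the reconstruction as supervised learning; other parameterizations (e.g., real and imaginary parts) are equally plausible and the paper does not specify which is used.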

Highlights

  • In wave optics, an optical field is expressed as a complex amplitude, which describes both the amplitude and phase of a light wave [1]

  • We presented a method for single-shot, lensless complex-amplitude imaging with incoherent light and no imaging optics

  • We demonstrated the proposed method experimentally with a coded aperture (CA) implemented on an OHP film, using handwritten datasets (one way of forming such training pairs is sketched after this list)
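As a rough illustration of how a handwritten character database could be turned into complex-amplitude training targets, the sketch below pairs one character image as the amplitude pattern and another as the phase pattern. This construction, the phase range, and the function names are assumptions for illustration only; the paper's actual dataset protocol and the physical forward model with the coded aperture are not reproduced here.

```python
# Illustrative sketch only: build a complex-amplitude ground-truth field from
# two grayscale handwritten character images (values assumed in [0, 1]).
import numpy as np


def make_complex_target(char_a, char_b, max_phase=np.pi):
    """One character defines the amplitude, the other the phase (assumed protocol)."""
    amplitude = char_a.astype(np.float64)           # transmittance pattern
    phase = char_b.astype(np.float64) * max_phase   # phase pattern scaled to [0, max_phase]
    field = amplitude * np.exp(1j * phase)
    # Stack amplitude and phase as the two-channel regression target for the network.
    target = np.stack([amplitude, phase], axis=0)
    return field, target


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((64, 64))   # stand-ins for two handwritten character images
    b = rng.random((64, 64))
    field, target = make_complex_target(a, b)
    print(field.shape, target.shape)  # (64, 64) (2, 64, 64)
```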

Summary

Introduction

In wave optics, an optical field is expressed as a complex amplitude, which describes both the amplitude and phase of a light wave [1]. Digital holography (DH), a representative technique for capturing complex-amplitude fields, basically assumes coherent illumination, such as a laser, but several incoherent DH methods have been proposed [6,7,8,9,10]. One issue with these incoherent DH methods is a tradeoff between the number of shots and the space–bandwidth product due to the use of a spatial or temporal reference carrier. Single-shot complex-amplitude/phase imaging methods based on the Shack–Hartmann wavefront sensor have utilized a lens array or a holographic optical element to observe a stereo or plenoptic image, which results in a tradeoff between the spatial and angular resolutions [12,13,14,15,16,17]. These methods involve a compromise between the space–bandwidth product and the simplicity of the optical setup. The proposed learning-based approach extends the range of applications of complex-amplitude imaging and shows the importance of machine learning techniques in optical sensing.

Method
Experimental demonstration
Conclusion
