Abstract

Lightfield phase modulation has become an effective approach for extending the depth of field (DOF) in computational imaging. However, accurately reconstructing fine details and color under fast capture remains challenging. Here, we demonstrate high-dynamic all-in-focus imaging based on a proposed paradigm of diffractive network encoding and electronic network decoding (DE-ED). Through the efficient collaboration of physical diffractive layers and electronic convolutional layers, this learning-based model exhibits significantly enhanced generalization and data-fitting capabilities. In experiments, the proposed method shows clear superiority in high-dynamic adaptation, autofocusing, and denoising compared with conventional methods. Specifically, an ultrahigh capture frame rate with ST < 1/3000 s can be precisely accommodated with an imaging aperture only ∼5.6 mm in diameter under natural illumination. Several videos of the proposed high-dynamic reconstruction demonstrate the method's time efficiency and consistency. These results highlight the unique advantages of the constructed hybrid opto-electronic network model, based on data-driven end-to-end learning, for next-generation computational imaging.
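
The abstract describes a hybrid model in which a physical diffractive (optical) encoder and an electronic convolutional decoder are optimized jointly. The following is a minimal sketch of that DE-ED idea, not the authors' implementation: the diffractive layer is modeled as a learnable phase mask followed by angular-spectrum free-space propagation, and the decoder is a small CNN. All concrete choices (array size, wavelength, pixel pitch, propagation distance, network depth, loss, and the synthetic training data) are illustrative assumptions.

```python
# Minimal DE-ED sketch (illustrative assumptions, not the paper's implementation):
# a learnable diffractive phase mask simulated with angular-spectrum propagation,
# followed by a small convolutional decoder, trained end-to-end in PyTorch.
import math
import torch
import torch.nn as nn


def angular_spectrum_propagate(field, wavelength, pixel_pitch, distance):
    """Propagate a complex field by `distance` using the angular spectrum method."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=pixel_pitch, device=field.device)
    FX, FY = torch.meshgrid(fx, fx.clone(), indexing="xy")
    k = 2 * math.pi / wavelength
    # Free-space transfer function; the square-root argument is clamped so
    # evanescent components contribute no extra phase (a common simplification).
    arg = torch.clamp(1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, min=0.0)
    H = torch.exp(1j * k * torch.sqrt(arg) * distance)
    return torch.fft.ifft2(torch.fft.fft2(field) * H)


class DiffractiveEncoder(nn.Module):
    """Learnable phase mask acting as the optical encoding layer (simulated)."""
    def __init__(self, size=128, wavelength=550e-9, pixel_pitch=4e-6, distance=5e-3):
        super().__init__()
        self.phase = nn.Parameter(torch.zeros(size, size))  # trainable phase profile
        self.wavelength, self.pixel_pitch, self.distance = wavelength, pixel_pitch, distance

    def forward(self, intensity):
        # Treat the real-valued input as an amplitude object, apply the phase mask,
        # propagate to the sensor plane, and record the resulting intensity.
        field = torch.sqrt(intensity.clamp(min=0)) * torch.exp(1j * self.phase)
        sensor_field = angular_spectrum_propagate(
            field, self.wavelength, self.pixel_pitch, self.distance)
        return sensor_field.abs() ** 2


class ConvDecoder(nn.Module):
    """Small convolutional decoder reconstructing the all-in-focus image."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class DEED(nn.Module):
    """Diffractive encoding + electronic decoding, optimized jointly end-to-end."""
    def __init__(self):
        super().__init__()
        self.encoder = DiffractiveEncoder()
        self.decoder = ConvDecoder()

    def forward(self, img):                      # img: (B, 1, 128, 128) intensities
        measured = self.encoder(img.squeeze(1))  # simulated sensor measurement
        return self.decoder(measured.unsqueeze(1))


# One end-to-end training step on synthetic data (a stand-in for real focal stacks).
model = DEED()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
target = torch.rand(4, 1, 128, 128)              # hypothetical all-in-focus targets
loss = nn.functional.mse_loss(model(target), target)
loss.backward()
optimizer.step()
```

The key design point the sketch illustrates is that gradients flow through the simulated optics into the phase mask, so the optical encoding and the electronic decoding are learned together rather than designed separately.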
