Abstract

Mask-based lensless imaging is an emerging imaging modality that replaces lenses with a thin optical mask and relies on computation to reconstruct images from the resulting multiplexed measurements. Most existing reconstruction algorithms assume that the forward imaging process is a convolution with a point spread function derived from the system model. In practice, however, there is model mismatch, which degrades the reconstructed images. In this paper, we investigate the impact of model mismatch in mask-based lensless imaging and, for the first time, illustrate the accumulated artifacts and information loss caused by mismatch error in state-of-the-art approaches that perform model-based reconstruction and learning-based enhancement in separate stages. To overcome this, we develop a novel physics-informed deep learning architecture designed to address such mismatch error. The proposed hybrid reconstruction network combines unrolled model-based optimization, which enforces the system physics, with deep learning layers for model correction. In addition to a cascaded enhancement network, we introduce a parallel data-driven branch that uses both the input measurement and all intermediate outputs of the model-based layers to correct the bias and compensate for the information loss caused by model mismatch. The effectiveness and robustness of the proposed model mismatch compensation network, referred to as MMCN, are demonstrated on real lensless images. Experimental results show noticeably better performance for MMCN than for alternative methods.
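To make the modeling assumption concrete, the following is a minimal NumPy sketch of the convolutional forward model that the abstract says most reconstruction algorithms assume. The circular-convolution implementation and the `mismatch_residual` helper are illustrative assumptions for exposition, not the paper's actual code.

```python
import numpy as np

def forward(scene, psf):
    # Idealized forward model assumed by most lensless reconstruction
    # algorithms: the sensor measurement is the 2-D (circular)
    # convolution of the scene with the calibrated point spread
    # function (PSF), computed efficiently in the Fourier domain.
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

def mismatch_residual(scene, psf_true, psf_calibrated):
    # Model mismatch: the physical system responds with psf_true, but
    # reconstruction uses psf_calibrated. This residual is the error
    # term that accumulates through model-based reconstruction stages.
    return forward(scene, psf_true) - forward(scene, psf_calibrated)
```

When `psf_true` and `psf_calibrated` coincide, the residual is zero and model-based reconstruction behaves as designed; any calibration error makes the residual nonzero, which is the bias the proposed data-driven branch is meant to compensate.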
