Abstract

Lensless image reconstruction is an ill-posed inverse problem in computational imaging with numerous applications in machine vision. Existing approaches rely on large datasets to learn deconvolution and are often specific to the point spread function (PSF) of a particular lensless imager. Generating pairs of lensless images and their corresponding ground truths requires a specialized laboratory setup, making dataset collection challenging. We propose a reconstruction method using untrained neural networks that relies on the underlying physics of lensless image formation. We use an encoder-decoder network to reconstruct the image from a lensless measurement with a known PSF. The same network can predict the PSF when supplied with a single input and ground-truth pair, thus serving as a one-time calibration step for any lensless imager. We optimize our model with a physics-guided consistency loss to perform both reconstruction and PSF estimation. Our model generates accurate non-blind reconstructions with a PSNR of 24.55 dB.

Keywords: Lensless image reconstruction; Untrained neural networks; Computational imaging
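The physics-guided consistency loss mentioned above can be illustrated with a minimal sketch. The paper does not give implementation details here, so the following assumes the common lensless forward model in which the sensor measurement is the scene convolved with the imager's PSF; `forward_model` and `consistency_loss` are hypothetical names, and FFT-based circular convolution is used as a simplification.

```python
import numpy as np

def forward_model(scene, psf):
    """Simulate a lensless measurement: the sensor records the scene
    convolved with the imager's point spread function (PSF).
    Circular convolution via FFT, a common simplification."""
    H = np.fft.rfft2(psf, s=scene.shape)
    X = np.fft.rfft2(scene, s=scene.shape)
    return np.fft.irfft2(H * X, s=scene.shape)

def consistency_loss(reconstruction, measurement, psf):
    """Physics-guided consistency: re-apply the forward model to the
    network's reconstruction and compare against the raw measurement.
    Minimizing this loss needs no paired training data."""
    simulated = forward_model(reconstruction, psf)
    return np.mean((simulated - measurement) ** 2)

# Toy check: a perfect reconstruction yields (near-)zero loss.
rng = np.random.default_rng(0)
scene = rng.random((32, 32))
psf = rng.random((32, 32))
measurement = forward_model(scene, psf)
loss = consistency_loss(scene, measurement, psf)
```

In the untrained-network setting, the same loss can drive gradient updates of an encoder-decoder's weights (reconstruction with a known PSF) or, symmetrically, of a PSF estimate given one known input/ground-truth pair.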
