Abstract

In recent years, convolutional neural networks (CNNs) have been successfully applied to reconstruct images of objects from the speckle patterns generated when light passes through scattering media. Achieving this requires collecting a large amount of data to train the CNN. However, the characteristics of light passing through the scattering medium may change, and in such situations a substantial amount of new data must be collected to re-train the CNN before images can be reconstructed. To address this challenge, this study introduces transfer learning techniques. Specifically, we propose a novel Residual U-Net Generative Adversarial Network, denoted ResU-GAN. The network is first pre-trained on a large amount of data collected under either visible or non-visible light, and subsequently fine-tuned on a small amount of data collected under the other illumination. Experimental results demonstrate the strong reconstruction performance of the ResU-GAN network. Furthermore, by incorporating transfer learning, the network can reconstruct speckle images across different datasets. The findings presented in this paper provide a more generalized approach for applying CNNs to cross-spectral speckle imaging.
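The pre-train-then-fine-tune strategy described in the abstract can be illustrated in miniature. The following sketch is not the authors' ResU-GAN; it is a toy linear model trained by gradient descent, where a "source" task (analogous to abundant visible-light speckle data) is used for pre-training and a related "target" task (analogous to scarce non-visible-light data) is used for fine-tuning. All names, dimensions, and data here are illustrative assumptions.

```python
import random

random.seed(0)
N_FEAT = 5  # toy input dimension

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def mse(w, data):
    return sum((predict(w, x) - y) ** 2 for x, y in data) / len(data)

def train(w, data, lr=0.05, steps=100):
    """Plain gradient descent on mean-squared error for a linear model."""
    w = list(w)
    for _ in range(steps):
        grads = [0.0] * N_FEAT
        for x, y in data:
            err = predict(w, x) - y
            for i in range(N_FEAT):
                grads[i] += 2 * err * x[i] / len(data)
        w = [wi - lr * g for wi, g in zip(w, grads)]
    return w

def make_data(w_true, n):
    """Synthetic (input, output) pairs generated by a ground-truth map."""
    xs = [[random.gauss(0, 1) for _ in range(N_FEAT)] for _ in range(n)]
    return [(x, predict(w_true, x)) for x in xs]

# "Source" condition: plenty of data (e.g. one illumination wavelength).
w_src = [random.gauss(0, 1) for _ in range(N_FEAT)]
src = make_data(w_src, 500)

# "Target" condition: a related mapping, but only a few samples.
w_tgt = [wi + random.gauss(0, 0.1) for wi in w_src]
tgt = make_data(w_tgt, 20)

w_pre = train([0.0] * N_FEAT, src, steps=200)      # pre-train on source
w_ft = train(w_pre, tgt, steps=30)                 # fine-tune on target
w_scratch = train([0.0] * N_FEAT, tgt, steps=30)   # baseline: target only

err_ft, err_scratch = mse(w_ft, tgt), mse(w_scratch, tgt)
print(f"fine-tuned: {err_ft:.4g}  from scratch: {err_scratch:.4g}")
```

Because the pre-trained weights already sit close to the target mapping, the fine-tuned model reaches a lower target error in the same small number of steps than a model trained from scratch on the scarce target data, which is the core intuition behind reusing a network trained on one spectral band for another.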
