Abstract

Deep learning has been broadly applied to imaging in scattering applications. A common framework is to train a descattering network that recovers the image by removing scattering artifacts. To achieve the best results on a broad spectrum of scattering conditions, individual “expert” networks need to be trained for each condition. However, the expert’s performance sharply degrades when the testing condition differs from the training condition. An alternative brute-force approach is to train a “generalist” network using data from diverse scattering conditions. This generally requires a larger network to encapsulate the diversity in the data and a sufficiently large training set to avoid overfitting. Here, we propose an adaptive learning framework, termed dynamic synthesis network (DSN), which dynamically adjusts the model weights to adapt to different scattering conditions. The adaptability is achieved by a novel “mixture of experts” architecture that dynamically synthesizes a network by blending multiple experts using a gating network. We demonstrate the DSN in holographic 3D particle imaging under a variety of scattering conditions. We show in simulation that the DSN generalizes across a continuum of scattering conditions. In addition, we show that a DSN trained entirely on simulated data generalizes to experiments and achieves robust 3D descattering. We expect the same concept to find many other applications, such as denoising and imaging in scattering media. Broadly, our dynamic synthesis framework opens up a new paradigm for designing highly adaptive deep learning and computational imaging techniques.
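
A minimal sketch can make the gating idea concrete. The snippet below (NumPy) blends pretrained expert weights into a single synthesized kernel using softmax coefficients produced by a gate. The number of experts, the single-kernel “experts”, the input descriptor, and the linear gate are all illustrative assumptions, not the paper’s actual DSN architecture.

    # Illustrative sketch (not the paper's architecture): a gate maps an input
    # descriptor to softmax coefficients that blend expert weights per inference.
    import numpy as np

    rng = np.random.default_rng(0)

    num_experts = 4          # K experts, each assumed pretrained at one scattering level
    feat_dim = 8             # size of the input descriptor fed to the gate (assumed)
    kernel_shape = (3, 3)    # one conv kernel per expert, for illustration only

    expert_kernels = rng.standard_normal((num_experts, *kernel_shape))
    gate_weights = rng.standard_normal((feat_dim, num_experts))  # stand-in gating layer

    def gate(descriptor):
        """Softmax gating: input descriptor -> per-expert blending coefficients."""
        logits = descriptor @ gate_weights
        e = np.exp(logits - logits.max())
        return e / e.sum()

    def synthesize(descriptor):
        """Blend the expert kernels with the gate's coefficients (dynamic synthesis)."""
        alpha = gate(descriptor)                              # shape (num_experts,)
        return np.tensordot(alpha, expert_kernels, axes=1)    # weighted sum of kernels

    # Each input yields its own blended kernel, i.e. the model adapts "on the fly".
    kernel = synthesize(rng.standard_normal(feat_dim))
    print(kernel.shape)   # (3, 3)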

Highlights

  • We show that the dynamic synthesis network (DSN) can adaptively remove scattering artifacts even when the scattering condition has never been “seen” during training, with performance comparable to, if not better than, that of an expert network separately trained at the matching scattering condition

  • Our dynamic synthesis framework opens up a new paradigm for designing highly adaptive deep learning (DL)-based computational imaging techniques

  • The unique properties of the DSN include its dynamically synthesized feature representations of the input and its adaptively tuned network parameters, both of which are adjusted “on-the-fly” at each inference to achieve adaptation. This is in stark contrast to conventional deep neural networks (DNNs), which perform direct inference with fixed, pretrained network parameters

Introduction

Deep learning (DL) has become a powerful technique for tackling complex yet important computational imaging problems[1], such as phase imaging[2,3,4,5], tomography[6,7,8,9], ghost imaging[10,11,12,13], light-field microscopy[14,15], super-resolution imaging[16,17,18], enhancing digital holography[19,20,21,22], and imaging through scattering media[23,24,25,26]. Within these computational imaging applications, one of the prevalent problems is “descattering”, or removing scattering artifacts. For this purpose, a deep neural network (DNN) is generally trained to perform descattering, either directly on the measurement[2,3,4,10,13,19,20,23,25] or on the object-space projection[11,12].
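
To make this supervised descattering framing concrete, the toy sketch below fits a single linear layer on synthetic (scattered measurement, clean object) pairs by gradient descent. The synthetic mixing operator, the linear model, and the hyperparameters are assumptions chosen purely for illustration; in practice a DNN plays the role of the learned inverse map and is trained on physically simulated or measured data.

    # Toy illustration of supervised descattering: learn a map from scattered
    # measurements back to clean targets (all data and models here are synthetic).
    import numpy as np

    rng = np.random.default_rng(1)
    n_pix, n_pairs = 64, 500

    # Training pairs: each clean signal is degraded by a fixed mixing operator A
    # plus noise, a crude stand-in for a real scattering forward model.
    clean = rng.standard_normal((n_pairs, n_pix))
    A = np.eye(n_pix) + 0.1 * rng.standard_normal((n_pix, n_pix)) / np.sqrt(n_pix)
    scattered = clean @ A + 0.01 * rng.standard_normal((n_pairs, n_pix))

    # A single linear layer trained by gradient descent to map measurement -> object.
    W = np.zeros((n_pix, n_pix))
    lr, n_steps = 0.2, 500
    for _ in range(n_steps):
        residual = scattered @ W - clean
        W -= lr * (scattered.T @ residual) / n_pairs

    print("training MSE:", np.mean((scattered @ W - clean) ** 2))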
