Abstract

Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields including the physical, medical and life sciences. Here we report a deep learning-based volumetric image inference framework that uses 2D images sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to significantly increase the depth-of-field of a 63×/1.4NA objective lens, while also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. We further illustrate the generalization of this recurrent network for 3D imaging by showing its resilience to varying imaging conditions, including, e.g., different sequences of input images, covering various axial permutations and unknown axial positioning errors. We also demonstrate wide-field to confocal cross-modality image transformations using the Recurrent-MZ framework, performing 3D image reconstruction of a sample from a few wide-field 2D fluorescence images as input and matching confocal microscopy images of the same sample volume. Recurrent-MZ demonstrates the first application of recurrent neural networks in microscopic image reconstruction and provides a flexible and rapid volumetric imaging framework, overcoming the limitations of current 3D scanning microscopy tools.

Highlights

  • High-throughput imaging of 3D samples is of significant importance for numerous fields

  • Recurrent-MZ-based volumetric imaging of C. elegans samples: a Recurrent-MZ network was trained and validated using C. elegans samples, and blindly tested on new specimens that were not part of the training/validation dataset. This trained Recurrent-MZ was used to reconstruct C. elegans samples with high fidelity over an extended axial range of 18 μm based on three 2D input images that were captured with an axial spacing of Δz = 6 μm; these three 2D images were fed into Recurrent-MZ in groups of two, i.e., M = 2 (Fig. 2)

  • We demonstrated a new deep learning-based volumetric imaging framework, termed Recurrent-MZ, enabled by a convolutional recurrent neural network, which significantly extends the DOF of the microscopy system using sparse 2D scans, providing a 30-fold reduction in the number of required mechanical scans
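The sliding-input scheme described in the highlights above (three 2D planes fed to the network in overlapping groups of M = 2) can be sketched in miniature. The sketch below is purely illustrative: the function names (`recurrent_fuse`, `average_update`) and the toy averaging update rule are placeholders standing in for the learned convolutional-recurrent update of the actual Recurrent-MZ network, which is not reproduced here.

```python
# Illustrative sketch only: a running element-wise average stands in
# for the learned convolutional-recurrent update of Recurrent-MZ.

def recurrent_fuse(planes, update):
    """Fold an axially ordered sequence of 2D planes into one hidden state.

    planes : list of 2D images (lists of rows)
    update : function (hidden, plane) -> new hidden state
    """
    hidden = None
    for p in planes:
        hidden = p if hidden is None else update(hidden, p)
    return hidden

def average_update(h, x):
    # Toy update rule: element-wise average of hidden state and new plane.
    return [[(hv + xv) / 2 for hv, xv in zip(hr, xr)]
            for hr, xr in zip(h, x)]

# Three 2x2 "planes" grouped into overlapping pairs, mirroring the
# M = 2 sliding-input scheme described above.
planes = [[[1, 1], [1, 1]],
          [[3, 3], [3, 3]],
          [[5, 5], [5, 5]]]
pairs = [planes[i:i + 2] for i in range(len(planes) - 1)]
fused = [recurrent_fuse(pair, average_update) for pair in pairs]
print(fused[0][0][0])  # → 2.0 (fusion of planes 1 and 2)
```

In the real network, each fused hidden state would then be decoded into output slices at arbitrary axial positions; here the fusion step alone is shown.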


Introduction

High-throughput imaging of 3D samples is of significant importance for numerous fields. Different imaging methods have been proposed to improve the throughput of scanning-based 3D microscopy techniques, such as multifocal imaging[8,9,10,11,12,13], light-field microscopy[14,15], microscopy with engineered point spread functions (PSFs)[16,17,18] and compressive sensing[19,20,21]. These solutions introduce trade-offs, either by complicating the microscope system design, compromising the image quality and/or resolution, or prolonging the image post-processing time. Some of these limitations and performance trade-offs have partially restricted the wide-scale applicability of these computational methods for 3D microscopy.
