Abstract

Fluorescence lifetime imaging microscopy (FLIM) is a powerful tool for quantifying molecular compositions and studying molecular states in complex cellular environments, as the lifetime readings are not biased by fluorophore concentration or excitation power. However, current methods for generating FLIM images are either computationally intensive or unreliable when the number of photons acquired at each pixel is low. Here we introduce a new deep learning-based method, termed flimGANE (fluorescence lifetime imaging based on Generative Adversarial Network Estimation), that can rapidly generate accurate, high-quality FLIM images even under photon-starved conditions. We demonstrate that our model is up to 2,800 times faster than the gold-standard time-domain maximum likelihood estimation (TD_MLE) and that flimGANE provides a more accurate analysis of low-photon-count histograms in barcode identification, cellular structure visualization, Förster resonance energy transfer characterization, and metabolic state analysis in live cells. With its advantages in speed and reliability, flimGANE is particularly useful in fundamental biological research and clinical applications where high-speed analysis is critical.

Highlights

  • Fluorescence lifetime imaging microscopy (FLIM) is a powerful tool to quantify molecular compositions and study molecular states in complex cellular environments, as the lifetime readings are not biased by fluorophore concentration or excitation power

  • Numerous methods have been developed to infer the lifetime of interest, but they are often limited by long computation times, poor accuracy under low-light conditions, and invalid initial assumptions about the decay parameters. While deep learning methods, such as artificial neural networks (ANNs)[25] or convolutional neural networks (CNNs)[26,27], have been employed to achieve rapid fluorescence lifetime analysis under medium-photon-count conditions (200–500 photon counts per pixel), other deep learning algorithms may further improve the reliability of analyzing low-photon-count (100–200 photon counts per pixel) or even ultralow-photon-count data (50–100 photon counts per pixel) for live-cell imaging

  • Our flimGANE method is adapted from the Wasserstein GAN algorithm[34] (WGAN), where the generator (G) is trained to produce an “artificial” high-photon-count fluorescence decay histogram based on a low-photon-count input, while the discriminator (D) distinguishes the artificial decay histogram from the ground truth (a minimal sketch of this pairing follows this list)
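To make the generator/discriminator pairing above concrete, the following is a minimal, illustrative sketch of a WGAN over decay histograms, written in PyTorch. Everything here is an assumption for illustration: the layer widths, the bin count N_BINS, and the use of weight clipping with RMSprop follow the original WGAN recipe[34], not the exact flimGANE architecture.

```python
# Illustrative WGAN over fluorescence decay histograms.
# All sizes and names are hypothetical, not the published flimGANE model.
import torch
import torch.nn as nn

N_BINS = 256  # hypothetical number of time bins in a TCSPC decay histogram

class Generator(nn.Module):
    """Maps a normalized low-photon-count decay histogram to an
    'artificial' high-photon-count-like histogram."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_BINS, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, N_BINS), nn.Softmax(dim=-1),  # output sums to 1
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Wasserstein critic: scores how 'real' (high-photon-count) a
    histogram looks. No sigmoid -- the critic's output is unbounded."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_BINS, 512), nn.ReLU(),
            nn.Linear(512, 1),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)

def wgan_step(low_count, high_count, clip=0.01):
    """One WGAN update on a batch of (low, high) histogram pairs:
    the critic maximizes D(real) - D(fake); the generator maximizes
    D(fake). Weight clipping enforces the Lipschitz constraint."""
    # --- critic update ---
    opt_d.zero_grad()
    d_loss = -(D(high_count).mean() - D(G(low_count).detach()).mean())
    d_loss.backward()
    opt_d.step()
    for p in D.parameters():
        p.data.clamp_(-clip, clip)
    # --- generator update ---
    opt_g.zero_grad()
    g_loss = -D(G(low_count)).mean()
    g_loss.backward()
    opt_g.step()
```

The unbounded critic score (no sigmoid) is what distinguishes a Wasserstein critic from a standard GAN discriminator, and it is one reason WGAN training tends to be more stable, which matters when the inputs are sparse, noisy histograms.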


Introduction

We demonstrate a new fluorescence lifetime imaging method based on Generative Adversarial Network Estimation (flimGANE) that provides fast, fit-free, accurate, and high-quality FLIM images even under photon-starved conditions (50–200 photon counts per pixel). In contrast, alternatives such as FPGA-MLE need much more effort in hardware development and programming to be implemented in an existing optical system.
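For comparison, the TD_MLE baseline can be summarized in a few lines. The sketch below is a simplification, assuming a mono-exponential decay and ignoring the instrument response function (real analyses also deconvolve the IRF); it estimates a single pixel's lifetime by maximizing the multinomial likelihood of its photon arrival-time histogram. The function name tdmle_lifetime and all numeric choices are hypothetical.

```python
# Simplified time-domain MLE lifetime fit for one pixel.
# Assumes a mono-exponential decay truncated to the measurement window;
# the IRF is ignored for brevity.
import numpy as np
from scipy.optimize import minimize_scalar

def tdmle_lifetime(counts, bin_edges):
    """Estimate lifetime tau (same units as bin_edges) from a photon
    arrival-time histogram by maximizing the multinomial likelihood."""
    def neg_log_likelihood(tau):
        # Per-bin probabilities for lifetime tau, normalized over the
        # measurement window (bin_edges[0] .. bin_edges[-1]).
        cdf = 1.0 - np.exp(-bin_edges / tau)
        p = np.diff(cdf) / cdf[-1]
        return -np.sum(counts * np.log(p + 1e-12))
    res = minimize_scalar(neg_log_likelihood, bounds=(0.05, 20.0),
                          method="bounded")
    return res.x

# Example: ~100 photons (an ultralow-photon-count pixel) drawn from a
# 2.5 ns decay, histogrammed into 256 bins over a 12.5 ns window.
rng = np.random.default_rng(0)
arrivals = rng.exponential(2.5, size=100)
arrivals = arrivals[arrivals < 12.5]
edges = np.linspace(0.0, 12.5, 257)
counts, _ = np.histogram(arrivals, bins=edges)
print(tdmle_lifetime(counts, edges))  # ~2.5 ns, noisy at low counts
```

Running such an iterative optimization at every pixel is what makes TD_MLE computationally intensive for large FLIM images, whereas flimGANE replaces the per-pixel fit with a feed-forward network evaluation, which underlies its reported speed advantage.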
