Abstract

Fluorescence microscopy, which visualizes cellular components with fluorescent stains, is an invaluable method in image cytometry. From these images, various cellular features can be extracted. Together, these features form phenotypes that can be used to determine effective drug therapies, such as those based on nanomedicines. Unfortunately, fluorescence microscopy is time-consuming, expensive, labour-intensive, and toxic to the cells. Bright-field images lack these downsides, but they also lack the clear contrast of the cellular components and hence are difficult to use for downstream analysis. Generating the fluorescence images directly from bright-field images using virtual staining (also known as "label-free prediction" and "in-silico labeling") can offer the best of both worlds, but can be very challenging for cellular structures that are poorly visible in the bright-field images. To tackle this problem, deep learning models were explored to learn the mapping between bright-field and fluorescence images for adipocyte cell images. The models were tailored for each imaging channel, paying particular attention to the various challenges in each case, and those with the highest fidelity in extracted cell-level features were selected. The solutions included utilizing privileged information for the nuclear channel, and using image gradient information and adversarial training for the lipids channel. The former resulted in better morphological and count features, and the latter resulted in more faithfully captured defects in the lipids, which are key features required for downstream analysis of these channels.
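The "image gradient information" mentioned for the lipids channel can be illustrated with a minimal sketch of a gradient-aware loss, assuming a simple finite-difference formulation; the function names `gradient_loss` and `combined_loss` and the weight `lam` are illustrative choices, not the paper's exact loss:

```python
import numpy as np

def gradient_loss(pred, target):
    """Mean absolute difference between the spatial gradients of a
    predicted and a target image (finite differences). Illustrative
    sketch only; the paper's exact formulation may differ."""
    dy_p, dx_p = np.gradient(pred)
    dy_t, dx_t = np.gradient(target)
    return np.mean(np.abs(dy_p - dy_t)) + np.mean(np.abs(dx_p - dx_t))

def combined_loss(pred, target, lam=0.5):
    # L1 reconstruction term plus a gradient term that emphasises
    # sharp intensity transitions, such as lipid-droplet boundaries.
    return np.mean(np.abs(pred - target)) + lam * gradient_loss(pred, target)
```

The intuition is that a plain pixel-wise loss can blur fine boundaries, while matching image gradients explicitly penalises missed edges.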

Highlights

  • Nanomedicine uptake and effect on fat cells can be explored using microscopy imaging techniques applied to stem cell derived cell cultures

  • To the best of our knowledge, the work in this paper presents the first application of Learning Using Privileged Information (LUPI) in virtual staining for image cytometry

  • In this work we carefully considered the challenges posed by each imaging channel to tailor our solutions, whilst fine-tuning the model selection based on the ablation analysis


Introduction

Nanomedicine uptake and effect on fat cells (adipocytes) can be explored using microscopy imaging techniques applied to stem cell derived cell cultures. The "in-silico labeling" approach of [10] is a noteworthy example: they proposed a U-Net deep learning architecture [11] with inception-inspired modules [12] to generate fluorescence images given transmitted-light images as input. A conditional discriminator sees both the input to the generator and the fake or real outputs; this can help alleviate the artifacts that GANs can produce and was used, for instance, in the Pix2Pix algorithm [16]. We propose a method to generate fluorescent labels for adipocyte cell images directly from bright-field z-stacks. This is done by constructing three different models, one each for nuclei, cytoplasm, and lipid droplets. A GitHub repository providing the code base for our modelling solutions is available at https://github.com/aktgpt/brevis.
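The conditional-discriminator idea can be sketched as follows: instead of judging a fluorescence image alone, the discriminator receives the bright-field input concatenated channel-wise with the real or generated output, Pix2Pix-style. This is a minimal sketch under assumed shapes; `discriminator_input` and the 7-slice z-stack are illustrative, not the paper's actual configuration:

```python
import numpy as np

def discriminator_input(brightfield, fluorescence):
    """Build the input to a conditional (Pix2Pix-style) discriminator:
    the bright-field z-stack and a real or generated fluorescence
    channel are stacked along the channel axis, so the discriminator
    judges the (input, output) pair rather than the output alone."""
    # brightfield: (Z, H, W) z-stack; fluorescence: (H, W) single channel
    return np.concatenate([brightfield, fluorescence[None]], axis=0)

# e.g. a 7-slice z-stack plus one fluorescence channel -> 8 input channels
bf = np.zeros((7, 256, 256))
fl = np.zeros((256, 256))
pair = discriminator_input(bf, fl)
assert pair.shape == (8, 256, 256)
```

Because the pair is judged jointly, a generated image that looks plausible in isolation but is inconsistent with its bright-field input can still be rejected.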

Data and evaluation
Data preprocessing
Image based evaluation
CellProfiler evaluation
Base model
Nuclei model
Lipid model
Cytoplasm model
Pyramidal weighted inference
Ablation analysis
Final evaluation
Conclusion