Abstract

Generative adversarial networks (GANs) have recently been used successfully to create realistic synthetic 2D microscopy cell images and to predict intermediate cell stages. In the current paper we highlight that GANs can not only create synthetic cell images optimized for different fluorescent molecular labels, but that, when GANs are used to augment training data with scaling or other transformations, the inherent length scale of biological structures is retained. In addition, GANs make it possible to create synthetic cells with specific shape features, which can be used, for example, to validate different methods for feature extraction. Here, we apply GANs to create 2D distributions of fluorescent markers for F-actin in the cell cortex of Dictyostelium cells (ABD), a membrane receptor (cAR1), and a cortex-membrane linker protein (TalA). The recent, more widespread use of 3D lightsheet microscopy, where obtaining sufficient training data is considerably more difficult than in 2D, creates significant demand for novel approaches to data augmentation. We show that it is possible to generate synthetic 3D cell images directly with GANs, but the limitations are excessive training times, dependence on high-quality segmentations of 3D images, and the fact that the number of z-slices cannot be adjusted freely without retraining the network. We demonstrate that, for molecular labels that are highly correlated with cell shape, such as F-actin in our example, 2D GANs can be used efficiently to create pseudo-3D synthetic cell data from individually generated 2D slices. Because high-quality segmented 2D cell data are more readily available, this is an attractive alternative to using less efficient 3D networks.
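
To illustrate the pseudo-3D construction, here is a minimal sketch (our illustration, not code from the paper): a trained 2D generator, here the assumed function generate_2d, is applied slice by slice to a segmented z-stack, and the outputs are stacked into a synthetic volume.

```python
import numpy as np

def pseudo_3d_stack(mask_volume, generate_2d):
    """Assemble a pseudo-3D synthetic cell from a segmented z-stack.

    mask_volume : (Z, H, W) binary segmentation of the cell.
    generate_2d : trained 2D GAN generator mapping one (H, W) mask slice
                  to one synthetic fluorescence slice (assumed interface).
    """
    slices = [generate_2d(mask_volume[z]) for z in range(mask_volume.shape[0])]
    return np.stack(slices, axis=0)  # (Z, H, W) synthetic volume
```

Because each slice is generated independently, the number of z-slices is not fixed by the network architecture, in contrast to a fully 3D GAN that must be retrained when the stack depth changes.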

Highlights

  • We investigated three separate conditional generative adversarial networks (GANs), based on the architecture proposed in Isola et al. (2017), trained with images of Dictyostelium cells labeled by three different fluorescent markers (a schematic code sketch of this conditional setup follows the list): (a) a marker for the F-actin cytoskeleton (ABD-GFP), which is important in driving cellular shape changes; (b) a membrane receptor (cAR1) for the chemoattractant cAMP, which in Dictyostelium controls directed cell motility and development; and (c) a protein that links the cell membrane and the F-actin cell cortex (TalA-GFP or TalA-mNeon), important for cell motility and cellular shape changes

  • We investigated two generative adversarial network architectures for synthesizing 2D/3D images of single cells from their segmented counterparts
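
To make the conditional setup concrete, the following is a minimal sketch, not the authors' published code: a pix2pix-style pairing of a generator that maps a binary segmentation mask to a synthetic fluorescence image with a PatchGAN discriminator that judges (mask, image) pairs, following Isola et al. (2017). The layer sizes and loss weight are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.2),
    )

class Generator(nn.Module):
    """Toy encoder-decoder standing in for the U-Net generator."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(conv_block(1, 64), conv_block(64, 128))
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
            nn.Tanh(),  # synthetic fluorescence intensities in [-1, 1]
        )
    def forward(self, mask):
        return self.decode(self.encode(mask))

class PatchDiscriminator(nn.Module):
    """Classifies overlapping patches of a (mask, image) pair as real/fake."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(2, 64), conv_block(64, 128),
            nn.Conv2d(128, 1, 4, padding=1),  # one logit per patch
        )
    def forward(self, mask, image):
        return self.net(torch.cat([mask, image], dim=1))

G, D = Generator(), PatchDiscriminator()
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # L1 weight used in Isola et al. (2017)

def training_losses(mask, real_image):
    fake = G(mask)
    # Discriminator: real (mask, image) pairs -> 1, generated pairs -> 0.
    d_real, d_fake = D(mask, real_image), D(mask, fake.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    # Generator: fool D while staying close to the real image in L1.
    g_adv = D(mask, fake)
    g_loss = bce(g_adv, torch.ones_like(g_adv)) + lambda_l1 * l1(fake, real_image)
    return d_loss, g_loss
```

Isola et al. combine the adversarial term with an L1 reconstruction loss so that the generated fluorescence image stays registered to the input segmentation; the same conditioning principle applies whether the target label is ABD-GFP, cAR1, or TalA.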

Introduction

The rapid development of imaging technologies in the life sciences is causing a surge in high-resolution 2D and 3D microscopic images of cells, mostly employing fluorescent molecular labels. This creates a demand for accessible and sufficiently large collections of biological image data that can be used either for testing existing solutions for feature extraction and cell classification, or for developing new ones. Two prominent examples of applications that require large datasets are (1) assessing the quality of 2D/3D segmentation algorithms and (2) training artificial deep neural networks. The first case requires manual or guided segmentation to generate ground truth data, which is costly and laborious, and in 3D often prohibitive. The second case naturally benefits from augmented collections of diverse training data. As we demonstrate here, however, traditional image transformations used in data augmentation can produce biologically improper images.
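
The following toy example (hypothetical pixel size and structure width) shows the problem with scaling, one such traditional transformation: zooming an image also zooms structures such as the F-actin cortex, whose real thickness is fixed by biology, while the pixel size recorded for the dataset stays the same.

```python
import numpy as np
from scipy.ndimage import zoom

pixel_size_um = 0.1                 # assumed pixel size of the dataset
image = np.zeros((128, 128))
image[20:25, :] = 1.0               # toy "cortex" band, 5 px = 0.5 um thick

augmented = zoom(image, 1.5, order=1)        # classic scaling augmentation
thickness_px = int((augmented[:, 64] > 0.5).sum())
print(thickness_px * pixel_size_um)          # ~0.75 um: cortex now 50% too thick
```

A GAN trained on real examples, by contrast, can generate cells of varying shape while reproducing labeled structures at their correct physical scale, which motivates the approach developed here.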
