Abstract

Cell shapes provide crucial biological information about complex tissues. Different cell types often have distinct shapes, and collective shape changes usually indicate morphogenetic events and mechanisms. Identifying and detecting collective cell shape changes in large collections of 3D time-lapse images of complex tissues is an important step in assaying such mechanisms, but it is a tedious and time-consuming task. Machine learning provides new opportunities to detect cell shape changes automatically. However, generating sufficient training samples for pattern identification through deep learning is challenging because of the limited number of images and annotations. We present a deep learning approach that requires minimal well-annotated training samples and apply it to identify multicellular rosettes in 3D live images of the Caenorhabditis elegans embryo with fluorescently labeled cell membranes. Our strategy combines two approaches, feature transfer and generative adversarial networks (GANs), to boost image classification with small training samples. Specifically, we use a GAN framework and conduct unsupervised training on 11,250 unlabeled images to capture the general characteristics of cell membrane images. We then transfer the structure of the GAN discriminator into a new AlexNet-style neural network for further learning with several dozen labeled samples. Our experiments showed that with 10-15 well-labeled rosette images and 30-40 randomly selected non-rosette images, our approach identifies rosettes with more than 80% accuracy and captures more than 90% of the accuracy achieved with a training dataset that is five times larger. We also established a public benchmark dataset for rosette detection. This GAN-based transfer approach can be applied to the study of other cellular structures with minimal training samples.
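As a rough illustration of the discriminator-to-classifier transfer described above, the following PyTorch sketch shows how a GAN discriminator's convolutional trunk could be reused as the feature extractor of a small classifier and then fine-tuned on a few dozen labeled crops. The layer sizes and the names Discriminator and RosetteClassifier are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """DCGAN-style discriminator; its convolutional trunk learns general
    membrane-image features during unsupervised GAN training on unlabeled images.
    (Illustrative layer sizes, not the paper's exact network.)"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
        )
        self.real_fake_head = nn.Sequential(nn.Flatten(), nn.LazyLinear(1))

    def forward(self, x):
        return self.real_fake_head(self.features(x))

class RosetteClassifier(nn.Module):
    """Classifier that reuses the GAN discriminator's convolutional trunk as its
    feature extractor, then adds a small head for rosette vs. non-rosette labels."""
    def __init__(self, pretrained_disc: Discriminator):
        super().__init__()
        self.features = pretrained_disc.features              # transferred structure + weights
        self.classifier = nn.Sequential(nn.Flatten(), nn.LazyLinear(2))

    def forward(self, x):
        return self.classifier(self.features(x))

# After unsupervised GAN training, reuse the discriminator trunk and fine-tune
# on the small labeled set (here just a shape check on random inputs):
disc = Discriminator()                      # assume GAN training has already updated its weights
clf = RosetteClassifier(disc)
logits = clf(torch.randn(4, 1, 64, 64))     # four 64x64 single-channel membrane crops
```

In the setting described in the abstract, the transferred trunk would then be trained further with the 10-15 labeled rosette images and 30-40 non-rosette images.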

Highlights

  • Live microscopy and image processing are commonly used to investigate cellular dynamics, quantify cellular behaviors, and support simulation-based hypothesis testing

  • Generative adversarial networks: we present a way to use a sizable unlabeled dataset and transfer learning techniques to improve the training of convolutional neural networks (CNNs)

  • GAN-based classifier captures key features with small data samples: when trained on a small dataset, a conventional classifier can quickly run into overfitting [24]

Introduction

Live microscopy and image processing are commonly used to investigate cellular dynamics, quantify cellular behaviors, and support simulation-based hypothesis testing. The huge amount of microscopic data generated during such studies presents unprecedented challenges for human-based, interactive data analysis. Advanced computing technology has been used in microscopic data analysis [10]; the majority of these efforts require deep domain knowledge and a labor-intensive annotation process. Artificial intelligence-based computer vision provides a "model-free" approach to solving generic data problems, such as object identification. Convolutional neural networks (CNNs) are widely adopted for object classification and identification [11], [21], [23]. Well-known CNNs usually contain a large number of parameters (e.g., more than 25 million in a ResNet-50 network), which require large, well-labeled training datasets. Considering funding limitations and the scarcity of domain experts, it is still quite challenging to assemble such large, well-labeled datasets in many biological studies.
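The parameter count cited for ResNet-50 can be checked with a short torchvision snippet (assuming torchvision is installed; the standard ImageNet configuration has roughly 25.6 million parameters):

```python
import torchvision.models as models

# Instantiate the standard ResNet-50 architecture; no pretrained weights are
# needed just to count parameters.
resnet50 = models.resnet50(weights=None)
n_params = sum(p.numel() for p in resnet50.parameters())
print(f"ResNet-50 parameters: {n_params:,}")  # roughly 25.6 million
```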
