Abstract

Advances in artificial neural networks have made machine learning techniques increasingly important in image analysis tasks. Recently, convolutional neural networks (CNNs) have been applied to the problem of cell segmentation from microscopy images. However, previous methods used a supervised training paradigm to create an accurate segmentation model. This strategy requires a large amount of manually labeled cellular images, in which accurate pixel-level segmentations are produced by human operators. Generating such training data is expensive and a major hindrance to the wider adoption of machine learning based methods for cell segmentation. Here we present an alternative strategy that trains CNNs without any human-labeled data. We show that our method produces accurate segmentation models, is applicable to both fluorescence and bright-field images, and requires little to no prior knowledge of the signal characteristics.

Highlights

  • Advances in artificial neural networks have made machine learning techniques increasingly important in image analysis tasks

  • To train a convolutional neural network (CNN) for the segmentation task, one typically needs a significant amount of manually labeled training images, in which cell areas and/or cell boundaries are marked by human operators

  • The segmentation model exhibits additional pixel-level segmentation error, which we evaluated by comparing single-cell segmentation results with manual segmentations of selected cells


Introduction

Advances in artificial neural networks have made machine learning techniques increasingly important in image analysis tasks. We designed our neural network to perform segmentation on a smaller patch of the input image centered on the marker positions (Fig. 1). This converts the multi-cell segmentation problem into multiple single-cell segmentation problems, which in turn removes the under-segmentation bias as long as the nuclei markers are correctly computed. We will demonstrate an alternative approach, in which we generate synthetic "nucleus images" from the normal whole-cell images using a pretrained CNN model. This technique is similar to the method first demonstrated by Ounkomol et al. [26], in which they showed that CNNs can be trained to map one image modality (e.g., a bright-field image) to a different one (e.g., fluorescence images of the plasma membrane, nucleus, etc.).
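
To make the patch-centered strategy concrete, the following is a minimal sketch, not the authors' code: it assumes a hypothetical `single_cell_model` (a PyTorch CNN mapping a grayscale patch to a per-pixel probability map), an illustrative `PATCH_SIZE`, and a list of nucleus marker coordinates. Each patch is cropped around one marker, segmented independently, and the resulting mask is pasted back into a whole-image label map.

```python
# Sketch of patch-based single-cell segmentation around nucleus markers.
# `single_cell_model` and PATCH_SIZE are illustrative assumptions, not the paper's values.
import numpy as np
import torch

PATCH_SIZE = 64  # assumed patch width/height in pixels


def extract_patch(image: np.ndarray, center: tuple[int, int], size: int = PATCH_SIZE) -> np.ndarray:
    """Crop a size-by-size patch centered on `center`, reflecting at image borders."""
    half = size // 2
    padded = np.pad(image, half, mode="reflect")
    r, c = center
    return padded[r:r + size, c:c + size]


def segment_cells(image: np.ndarray,
                  markers: list[tuple[int, int]],
                  single_cell_model: torch.nn.Module) -> np.ndarray:
    """Run the (hypothetical) single-cell CNN on one patch per marker and
    paste each binary mask back into a whole-image label map."""
    labels = np.zeros(image.shape, dtype=np.int32)
    half = PATCH_SIZE // 2
    single_cell_model.eval()
    for idx, (r, c) in enumerate(markers, start=1):
        patch = extract_patch(image, (r, c))
        with torch.no_grad():
            x = torch.from_numpy(patch).float()[None, None]  # shape (1, 1, H, W)
            mask = torch.sigmoid(single_cell_model(x))[0, 0].numpy() > 0.5
        # Paste the per-cell mask back, clipping the patch to the image bounds.
        r0, c0 = max(r - half, 0), max(c - half, 0)
        r1, c1 = min(r + half, image.shape[0]), min(c + half, image.shape[1])
        sub = mask[r0 - (r - half):r1 - (r - half), c0 - (c - half):c1 - (c - half)]
        labels[r0:r1, c0:c1][sub] = idx
    return labels
```

Because each patch is segmented independently, touching cells cannot be merged into one object: every marker yields exactly one labeled region, which is the source of the reduced under-segmentation bias described above.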
