Abstract

Segmenting cell nuclei within microscopy images is a ubiquitous task in biological research and clinical applications. Unfortunately, segmenting low-contrast overlapping objects that may be tightly packed is a major bottleneck in standard deep learning-based models. We report a Nuclear Segmentation Tool (NuSeT) based on deep learning that accurately segments nuclei across multiple types of fluorescence imaging data. Using a hybrid network consisting of U-Net and Region Proposal Networks (RPN), followed by a watershed step, we have achieved superior performance in detecting and delineating nuclear boundaries in 2D and 3D images of varying complexities. By using foreground normalization and additional training on synthetic images containing non-cellular artifacts, NuSeT improves nuclear detection and reduces false positives. NuSeT addresses common challenges in nuclear segmentation such as variability in nuclear signal and shape, limited training sample size, and sample preparation artifacts. Compared to other segmentation models, NuSeT consistently fares better in generating accurate segmentation masks and assigning boundaries for touching nuclei.
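The abstract names foreground normalization without spelling it out; below is a minimal sketch of one plausible reading of the idea, standardizing each image by the mean and standard deviation of its estimated foreground pixels rather than the whole frame. The function name `foreground_normalize` and the Otsu-threshold foreground estimate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.filters import threshold_otsu

def foreground_normalize(image):
    """Standardize an image by the statistics of its estimated foreground.

    Illustrative sketch only: the foreground is estimated with a simple
    Otsu threshold, which is an assumption, not NuSeT's actual method.
    """
    img = image.astype(np.float32)
    fg = img > threshold_otsu(img)        # rough foreground mask
    if not fg.any():                      # fall back to whole-image statistics
        fg = np.ones_like(fg, dtype=bool)
    mean, std = img[fg].mean(), img[fg].std()
    return (img - mean) / (std + 1e-8)    # guard against zero variance
```

Normalizing against foreground statistics keeps the intensity scaling comparable across fields of view regardless of how much of each image is empty background, which is consistent with the abstract's claim that this step improves nuclear detection.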

Highlights

  • Quantitative single-cell analysis can reveal novel molecular details of cellular processes relevant to basic research, drug discovery, and clinical diagnostics

  • At the pixel level, the segmentation task of Mask R-CNN (Mask Region-based Convolutional Neural Network) is performed by a Fully Convolutional Network (FCN), which is less accurate with small training datasets compared with U-Net.[15,30]

  • To improve segmentation accuracy in images with large nuclear size variations, we modified the original Region Proposal Networks (RPN) architecture to use bounding box dimensions based on average nuclear size for each image (S2 Fig)
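The last highlight describes deriving anchor (bounding-box) dimensions from the average nuclear size of each image rather than using fixed scales. The sketch below illustrates that idea under stated assumptions: the helper name, the use of a rough binary mask as input, and the spread factors around the mean diameter are all hypothetical, not the modified RPN itself.

```python
import numpy as np
from skimage.measure import label, regionprops

def anchor_scales_from_nuclei(binary_mask, factors=(0.5, 1.0, 2.0)):
    """Estimate per-image RPN anchor scales from the mean nucleus size.

    `binary_mask` is a rough foreground segmentation of one image; the
    spread `factors` around the mean diameter are an assumed choice.
    """
    regions = regionprops(label(binary_mask))
    if not regions:
        return None                      # nothing detected in this image
    # equivalent_diameter: diameter of the circle with the same area as the region
    mean_diam = np.mean([r.equivalent_diameter for r in regions])
    return tuple(float(mean_diam * f) for f in factors)
```

Tying the anchor scales to the measured nuclei, instead of fixing them across the dataset, is what lets the same detector handle images with large nuclear size variations.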

Introduction

Quantitative single-cell analysis can reveal novel molecular details of cellular processes relevant to basic research, drug discovery, and clinical diagnostics. The main goal is to make reliable statements about cells as a whole (e.g., the number of cells, their average size and shape, or the detection of rare/unusual cells) rather than focusing on individual image pixels. For such problems, instance segmentation provides a more effective solution, as the loss function incorporates a sense of the whole object and not just individual pixels. A recent improvement is to incorporate a Faster R-CNN detection module: the algorithm computes object locations and uses them as markers for the watershed layer, improving the segmentation.[26] Another approach, Mask R-CNN[19], applies FCN-based segmentation to regions proposed by Region Proposal Networks (RPN) and achieves good segmentation results on real-world image datasets. However, Mask R-CNN employs fixed anchor scales for bounding boxes across all images, which is a limitation for samples with variable-sized nuclei.[18,19] Moreover, at the pixel level, the segmentation task of Mask R-CNN is performed by an FCN, which is less accurate with small training datasets compared with U-Net.[15,30]
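As a concrete illustration of the watershed-with-markers idea described above, the sketch below seeds a marker-controlled watershed with detector-supplied object centers. The function name and the assumption that detections arrive as (row, col) box centers are hypothetical; this is a sketch of the general technique, not the cited method's exact implementation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def split_touching_nuclei(binary_mask, box_centers):
    """Split a binary nucleus mask with a marker-controlled watershed.

    `box_centers` is an iterable of (row, col) object centers, e.g. taken
    from a detection network's bounding boxes (a hypothetical input here).
    """
    # One labeled seed pixel per detected object.
    markers = np.zeros(binary_mask.shape, dtype=np.int32)
    for i, (r, c) in enumerate(box_centers, start=1):
        markers[int(r), int(c)] = i
    # Flood from the seeds over the inverted distance transform so basins
    # meet along the thin ridges between touching nuclei.
    distance = ndi.distance_transform_edt(binary_mask)
    return watershed(-distance, markers=markers, mask=binary_mask)
```

Because each detected object contributes its own seed, touching nuclei end up in separate watershed basins even where the intensity boundary between them is weak, which is the improvement the detection-plus-watershed approach targets.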
