Abstract

Nuclei segmentation is a fundamental but challenging task in histopathological image analysis. One of the main problems is the existence of overlapping regions, which increases the difficulty of separating individual nuclei. In this study, to segment both nuclei and their overlapping regions, we introduce a nuclei segmentation method based on a two-stage learning framework consisting of two connected Stacked U-Nets (SUNets). The proposed SUNets consist of four parallel backbone nets, which are merged by an attention generation model. In the first stage, a Stacked U-Net is utilized to predict a pixel-wise segmentation of nuclei. The output binary map, together with the RGB values of the original image, is concatenated as the input of the second stage of SUNets. Because of the sizable imbalance between overlapping and background regions, the first network is trained with cross-entropy loss, while the second network is trained with focal loss. We applied the method to two publicly available datasets and achieved state-of-the-art performance for nuclei segmentation: mean Aggregated Jaccard Index (AJI) results were 0.5965 and 0.6210, and F1 scores were 0.8247 and 0.8060, respectively; our method also segmented the overlapping regions between nuclei, with an average AJI of 0.3254. The proposed two-stage learning framework outperforms many current segmentation methods, and the consistently good segmentation performance on images from different organs indicates the generalized adaptability of our approach.
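The loss choice above can be illustrated concretely. The following is a minimal NumPy sketch (not the authors' implementation) contrasting per-pixel binary cross-entropy with the focal loss of Lin et al.: the factor (1 - p_t)^gamma down-weights easy, well-classified pixels, so the loss is dominated by the rare overlapping-region pixels rather than the abundant background. The gamma and alpha values are the commonly used defaults, not values from this paper.

```python
import numpy as np

def cross_entropy(p, y, eps=1e-7):
    """Per-pixel binary cross-entropy; p = predicted foreground probability."""
    p = np.clip(p, eps, 1 - eps)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: (1 - p_t)^gamma suppresses easy pixels,
    alpha balances foreground vs. background classes."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    at = np.where(y == 1, alpha, 1 - alpha)  # class-balancing weight
    return float(np.mean(-at * (1 - pt) ** gamma * np.log(pt)))

# An easy background pixel (confidently correct) is down-weighted far more
# than a hard foreground pixel (confidently wrong).
p_easy, y_easy = np.array([0.1]), np.array([0.0])  # correct background
p_hard, y_hard = np.array([0.1]), np.array([1.0])  # missed foreground
ratio_easy = focal_loss(p_easy, y_easy) / cross_entropy(p_easy, y_easy)
ratio_hard = focal_loss(p_hard, y_hard) / cross_entropy(p_hard, y_hard)
```

Under these defaults the easy pixel retains well under 1% of its cross-entropy contribution, while the hard pixel retains roughly 20%, which is why focal loss suits the heavily imbalanced overlapping-region stage.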

Highlights

  • Morphological changes in the cell nucleus are considered an important signal in many diseases (Gurcan et al, 2009) and can provide clinically meaningful information during diagnosis, especially for cancers (Chow et al, 2015).

  • U-Net (Ronneberger et al, 2015) is a classical architecture based on the fully convolutional network (FCN) (Long et al, 2015), which has been widely used and has obtained promising performance when applied to the task of image segmentation (Litjens et al, 2017; Kong et al, 2020).

  • We evaluated our method on two publicly available datasets: one sourced from the Cancer Genome Atlas (TCGA) (Kumar et al, 2017) and the Triple-Negative Breast Cancer (TNBC) dataset (Naylor et al, 2017).

Introduction

Morphological changes in the cell nucleus are considered an important signal in many diseases (Gurcan et al, 2009) and can provide clinically meaningful information during diagnosis, especially for cancers (Chow et al, 2015). The conventional approach involves manual inspection and analysis by pathologists, who make diagnostic assessments based on certain morphological features of the nucleus. This manual assessment is a tedious and time-consuming task that can be beset by shortcomings such as poor sensitivity and specificity and low reproducibility. U-Net (Ronneberger et al, 2015) is a classical architecture based on the fully convolutional network (FCN) (Long et al, 2015), which has been widely used and has obtained promising performance when applied to the task of image segmentation (Litjens et al, 2017; Kong et al, 2020). Stacked U-Nets (SUNets) (Shah et al, 2018) can be considered a further improvement, as they iteratively combine features from different image scales while maintaining resolution. Leveraging the feature computation power of U-Nets in a deeper network architecture, SUNets are capable of handling images with increased complexity.
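The coupling between the two stages described in the abstract (the stage-1 binary map is concatenated with the RGB image to form the stage-2 input) can be sketched as follows. This is an illustrative NumPy sketch assuming channels-last arrays and a hypothetical 256x256 patch size; the actual tensor layout and resolution in the authors' implementation may differ.

```python
import numpy as np

def build_stage2_input(rgb, binary_map):
    """Stack the stage-1 binary nuclei map onto the RGB channels,
    producing the 4-channel input for the second SUNet.
    rgb: (H, W, 3) float image; binary_map: (H, W) {0, 1} mask."""
    return np.concatenate([rgb, binary_map[..., None]], axis=-1)

# Hypothetical example: one 256x256 RGB patch and its stage-1 prediction.
rgb = np.random.rand(256, 256, 3).astype(np.float32)
stage1_mask = (np.random.rand(256, 256) > 0.5).astype(np.float32)
stage2_input = build_stage2_input(rgb, stage1_mask)
```

Feeding the coarse nuclei map back in this way lets the second network focus its capacity on the ambiguous overlapping regions rather than re-learning the full segmentation from scratch.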

Methods