Abstract

Histopathological images carry rich information about tissue health, and nucleus segmentation underpins a wide range of downstream studies. Deep learning has demonstrated strong potential for nuclei segmentation. However, the cost and time required for data labeling, together with the shortage of annotators, make it difficult to keep pace with the continuous generation of new pathological images. This paper proposes a method that combines attention mechanisms with self-supervised learning to train a nuclei segmentation model on unlabeled data. The method consists of two stages. In the first stage, the image is preprocessed to obtain an enhanced image of the nucleus regions. In the second stage, the original image and the augmented views required for self-supervised learning are passed through the feature extractor. The results show that the proposed method achieves Dice scores exceeding 0.7 on the binary nucleus segmentation task without any labels. Fine-tuning the model on downstream tasks also yields multi-class segmentation with an F1 score of 0.7. Under the same experimental settings, the proposed method achieves a Dice score of 0.779 on the MoNuSeg dataset, only 0.030 lower than the best Dice score of 0.809 obtained by the compared supervised learning methods. Compared with other unsupervised models, the method demonstrates superior segmentation performance. Moreover, applying the trained weights to other histopathology image datasets produces comparable binary segmentation results.
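As a point of reference for the Dice scores reported above, the sketch below shows a minimal binary-mask Dice computation. The function name, example masks, and smoothing term are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks (1 = nucleus, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # 2*|A ∩ B| / (|A| + |B|), with eps to avoid division by zero on empty masks
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: compare a predicted nucleus mask with a ground-truth mask.
pred_mask = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
gt_mask   = np.array([[0, 1, 1], [0, 0, 0], [0, 0, 0]])
print(f"Dice: {dice_score(pred_mask, gt_mask):.3f}")  # 0.800
```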
