Abstract

In recent years, medical image segmentation (MIS) has made huge breakthroughs due to the success of deep learning. However, existing MIS algorithms still suffer from two types of uncertainty: (1) the uncertainty of the plausible segmentation hypotheses and (2) the uncertainty of segmentation performance. These two types of uncertainty affect the effectiveness of the MIS algorithm and, in turn, the reliability of medical diagnosis. Many studies have addressed the former but ignore the latter. Therefore, we propose the hierarchical predictable segmentation network (HPS-Net), which consists of a new network structure, a new loss function, and a cooperative training mode. To the best of our knowledge, HPS-Net is the first network in the MIS area that can generate both diverse segmentation hypotheses, to address the uncertainty of the plausible segmentation hypotheses, and measure predictions for these hypotheses, to address the uncertainty of segmentation performance. Extensive experiments were conducted on the LIDC-IDRI dataset and the ISIC2018 dataset. The results show that HPS-Net achieves the highest Dice score compared with the benchmark methods, i.e., the best segmentation performance. The results also confirm that the proposed HPS-Net can effectively predict the true negative rate (TNR) and true positive rate (TPR).

Highlights

  • With the development of medical imaging modalities such as ultrasound, X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), much attention has been devoted to developing new medical image-processing (MIP) techniques [1]

  • Our proposed HPS-Net is derived from probabilistic hierarchical segmentation (PHiSeg), which in turn is derived from the probabilistic U-Net; all three trace back to U-Net, a milestone in the field of medical image segmentation

  • In the measure loss Lms, l is the number of latent levels and Li is the sub-loss of the ith latent level; N is the batch size; Mn is the predicted measurement value of the nth sample; Un and Ln are the upper and lower bounds of the ground-truth measurement values of the nth segmentation hypothesis Sn; m is the number of ground-truth annotations, with Snj denoting the jth ground truth of the nth segmentation hypothesis; and M(·, ·) computes the measurement value, which can be the true positive rate (TPR), true negative rate (TNR), precision, or others
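The bounds Un and Ln described above can be understood as the best and worst scores a hypothesis attains when evaluated against each of the m annotations. The following sketch (our illustration, not the authors' code; the function names `tpr`, `tnr`, and `measurement_bounds` are our own) shows how M(·, ·) and the bounds could be computed for binary masks with NumPy:

```python
import numpy as np

def tpr(pred, gt):
    # True positive rate (sensitivity): TP / (TP + FN)
    tp = np.logical_and(pred == 1, gt == 1).sum()
    fn = np.logical_and(pred == 0, gt == 1).sum()
    return tp / (tp + fn) if (tp + fn) > 0 else 1.0

def tnr(pred, gt):
    # True negative rate (specificity): TN / (TN + FP)
    tn = np.logical_and(pred == 0, gt == 0).sum()
    fp = np.logical_and(pred == 1, gt == 0).sum()
    return tn / (tn + fp) if (tn + fp) > 0 else 1.0

def measurement_bounds(pred, gts, measure):
    """Evaluate one hypothesis against all m annotations and return
    (Ln, Un): the lower and upper bounds of the ground-truth
    measurement values for that hypothesis."""
    values = [measure(pred, gt) for gt in gts]
    return min(values), max(values)
```

For a hypothesis `pred` and annotations `gts = [gt1, gt2, ...]`, `measurement_bounds(pred, gts, tpr)` returns the (Ln, Un) pair that the measure loss compares against the predicted measurement value Mn.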


Summary

Introduction

With the development of medical imaging modalities such as ultrasound, X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), much attention has been devoted to developing new medical image-processing (MIP) techniques [1]. To the best of our knowledge, no other work has considered providing a corresponding measure prediction along with each segmentation hypothesis, and no other work has considered generating segmentation hypotheses based on specified measurement values. To fill these two gaps, we propose a hierarchical predictable segmentation network (HPS-Net). HPS-Net can learn a complex probability distribution of the samples in the latent space and has the ability to predict the measurement values; it can generate an unlimited number of segmentation hypotheses along with their measure predictions. The results (Symmetry 2021, 13, 2107) show that the proposed HPS-Net can provide effective measure predictions while keeping the maximum segmentation performance in terms of segmentation accuracy under both multiple annotations and a single annotation per image.
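To make the "unlimited hypotheses with measure predictions" interface concrete, the toy sketch below mimics it: each draw of a latent-style perturbation yields one segmentation hypothesis plus one scalar prediction. This is purely our illustration under stated assumptions — the function `generate_hypotheses`, the Gaussian perturbation standing in for hierarchical latent sampling, and the mean-probability stand-in for the measure head are all hypothetical, not HPS-Net's actual architecture:

```python
import numpy as np

def generate_hypotheses(logits, k=4, noise_scale=0.5, seed=0):
    """Illustrative sketch: draw k latent-style samples, decode each into a
    binary segmentation hypothesis, and emit a stand-in measure prediction
    (here, the mean foreground probability) for each hypothesis."""
    rng = np.random.default_rng(seed)
    hypotheses, measures = [], []
    for _ in range(k):
        z = rng.normal(0.0, noise_scale, size=logits.shape)  # stand-in for latent sampling
        probs = 1.0 / (1.0 + np.exp(-(logits + z)))          # sigmoid over perturbed logits
        hypotheses.append((probs > 0.5).astype(np.uint8))    # one segmentation hypothesis
        measures.append(float(probs.mean()))                 # one measure prediction
    return hypotheses, measures
```

Because each call to the sampler is cheap, k can be made arbitrarily large, which is the sense in which such a model generates an "unlimited" number of hypothesis-prediction pairs.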

Related Work
Probabilistic U-Net
Loss Functions
Training Procedure
Experiments and Results
Conclusions

