Abstract

Background and Objective: Semi-supervised learning for medical image segmentation is an important research area because it alleviates the huge cost of constructing reliable large-scale annotations in the medical domain. Recent semi-supervised approaches have demonstrated promising results by employing consistency regularization, pseudo-labeling, and adversarial learning. These methods primarily attempt to learn the distribution of labeled and unlabeled data by enforcing consistency in predictions or embedding context. However, previous approaches have focused only on local discrepancy minimization or on context relations within single classes.

Methods: In this paper, we introduce a novel adversarial learning-based semi-supervised segmentation method that effectively embeds both local and global features from multiple hidden layers and learns context relations between multiple classes. Our voxel-wise adversarial learning method uses a voxel-wise feature discriminator, which takes multilayer voxel-wise features (covering both local and global features) as input and embeds class-specific voxel-wise feature distributions. Furthermore, we improve our previous representation learning method by overcoming its information-loss and learning-stability problems, which enables rich representations of labeled data.

Results: In the experiments, we used the Left Atrial Segmentation Challenge dataset and the Abdominal Multi-Organ dataset to demonstrate the effectiveness of our method in both single-class and multiclass segmentation. The experimental results show that our method outperforms state-of-the-art semi-supervised learning approaches. Our proposed adversarial learning-based semi-supervised segmentation method successfully leveraged unlabeled data, improving network performance by 2% in Dice similarity coefficient on the multi-organ dataset.

Conclusion: We evaluated our approach on a wide range of medical datasets and showed that our method can be adapted to embed class-specific features. Furthermore, visual interpretation of the feature space demonstrates that the proposed method yields a well-distributed and well-separated feature space for both labeled and unlabeled data, which improves the overall prediction results.
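To make the Methods description concrete, the sketch below shows one plausible form of a voxel-wise, multilayer feature discriminator: feature maps taken from several hidden layers of the segmentation network are upsampled to a common resolution, concatenated with the soft class prediction, and mapped to a per-class, per-voxel real/fake score. This is a minimal illustration under stated assumptions; the layer names, channel sizes, and class count are hypothetical and do not reproduce the authors' implementation.

```python
# Hypothetical sketch of a voxel-wise, multilayer feature discriminator.
# Channel sizes, class count, and architecture are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VoxelwiseFeatureDiscriminator(nn.Module):
    """Scores every voxel as labeled (real) vs. unlabeled (fake),
    per class, from features drawn at multiple network depths."""

    def __init__(self, feature_channels=(64, 128, 256), num_classes=5):
        super().__init__()
        # Concatenated hidden-layer features plus the soft prediction map.
        total = sum(feature_channels) + num_classes
        self.net = nn.Sequential(
            nn.Conv3d(total, 128, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(128, 64, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(64, num_classes, kernel_size=1),  # one score map per class
        )

    def forward(self, features, soft_pred):
        # Upsample each hidden-layer feature map to the prediction's spatial
        # size, so every voxel sees both local and global context.
        target = soft_pred.shape[2:]
        upsampled = [F.interpolate(f, size=target, mode="trilinear",
                                   align_corners=False) for f in features]
        x = torch.cat(upsampled + [soft_pred], dim=1)
        return self.net(x)  # (B, num_classes, D, H, W) voxel-wise scores

# Usage sketch with dummy features from three assumed encoder stages.
if __name__ == "__main__":
    b, d, h, w = 1, 32, 64, 64
    feats = [torch.randn(b, 64, d // 4, h // 4, w // 4),
             torch.randn(b, 128, d // 8, h // 8, w // 8),
             torch.randn(b, 256, d // 16, h // 16, w // 16)]
    pred = torch.softmax(torch.randn(b, 5, d, h, w), dim=1)
    disc = VoxelwiseFeatureDiscriminator()
    print(disc(feats, pred).shape)  # torch.Size([1, 5, 32, 64, 64])
```

Keeping one output channel per class is one way a discriminator could embed class-specific feature distributions, matching the abstract's stated goal of learning context relations between multiple classes rather than a single global real/fake decision.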
