Abstract

Recent medical image segmentation methods rely heavily on large-scale training data and high-quality annotations. However, these resources are hard to obtain because medical images are scarce and professional annotators are few. How to exploit limited annotations while maintaining performance is therefore an essential yet challenging problem. In this paper, we tackle this problem in a self-learning manner by proposing a generative adversarial semi-supervised network. We use the limited annotated images as the main supervision signal, and exploit the unlabeled images as auxiliary information to improve performance. More specifically, we adapt a segmentation network as a generator that produces pseudo labels for unlabeled images. To make the generator robust, we train an uncertainty discriminator via generative adversarial learning to assess the reliability of the pseudo labels. To further ensure dependability, we apply a feature matching loss that enforces consistency between the feature distributions of the generated labels and the real labels. The verified pseudo labels are then used to optimize the generator in a self-learning manner. We validate the effectiveness of the proposed method on the right ventricle dataset, the Sunnybrook dataset, STACOM, the ISIC dataset, and the Kaggle lung dataset, achieving Dice coefficients of 0.8402–0.9121, 0.8103–0.9094, 0.9435–0.9724, 0.8635–0.886, and 0.9697–0.9885, respectively, using only 1/8 to 1/2 of the densely annotated labels. The improvements are up to 28.6 points over the corresponding fully supervised baselines.
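Two ingredients of the pipeline above lend themselves to a compact illustration: the feature matching loss (statistical consistency between discriminator features of real and generated labels) and the reliability gating of pseudo labels by the uncertainty discriminator. The sketch below is a minimal numpy illustration, not the authors' implementation; the function names, the batch-mean feature statistic, and the 0.8 confidence threshold are all illustrative assumptions.

```python
import numpy as np

def feature_matching_loss(feat_real, feat_fake):
    # Squared L2 distance between batch-mean discriminator features,
    # a common form of feature matching in GAN training (assumed here).
    return float(np.sum((feat_real.mean(axis=0) - feat_fake.mean(axis=0)) ** 2))

def select_pseudo_labels(pseudo_probs, disc_confidence, threshold=0.8):
    # Binarize generator probabilities into hard pseudo labels, and keep
    # only pixels where the discriminator deems the prediction reliable.
    hard_labels = (pseudo_probs > 0.5).astype(np.int64)
    reliable_mask = disc_confidence > threshold  # threshold is a placeholder
    return hard_labels, reliable_mask

rng = np.random.default_rng(0)

# Toy discriminator features for a batch of real and generated label maps.
feat_real = rng.normal(0.0, 1.0, size=(4, 16))
feat_fake = rng.normal(0.5, 1.0, size=(4, 16))
fm_loss = feature_matching_loss(feat_real, feat_fake)

# Toy per-pixel generator probabilities and discriminator confidences.
probs = rng.uniform(size=(2, 8, 8))
conf = rng.uniform(size=(2, 8, 8))
labels, mask = select_pseudo_labels(probs, conf)
```

In the self-learning step, only the pixels selected by `reliable_mask` would contribute to the supervised loss on unlabeled images, while `fm_loss` would be added to the generator objective alongside the adversarial term.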
