Abstract

High-quality medical images play an important role in intelligent medical analysis. However, the difficulty of acquiring medical images with professional annotations makes the required medical image datasets very expensive and time-consuming to build. In this paper, we propose a semi-supervised method, $ {\mathrm{C}\mathrm{A}\mathrm{U}}^{+} $, a consensus model of augmented unlabeled data for cardiac image segmentation. First, the whole framework is divided into two parts: the segmentation network and the discriminator network. The segmentation network is based on the teacher-student model. Labeled images are fed to the student model, while unlabeled images are processed by CTAugment: the strongly augmented samples are sent to the student model and the weakly augmented samples are sent to the teacher model. Second, $ {\mathrm{C}\mathrm{A}\mathrm{U}}^{+} $ adopts a hybrid loss function that mixes a supervised loss for labeled data with an unsupervised loss for unlabeled data. Third, adversarial learning is introduced to facilitate the semi-supervised learning of unlabeled images by using the confidence map generated by the discriminator as a supervisory signal. Evaluated on the Automated Cardiac Diagnosis Challenge (ACDC) dataset, the proposed $ {\mathrm{C}\mathrm{A}\mathrm{U}}^{+} $ shows good effectiveness and generality: compared with the latest semi-supervised learning methods, it improves the Dice coefficient (DSC) by up to 18.01, the Jaccard coefficient (JC) by up to 16.72, and the relative absolute volume difference (RAVD) by up to 0.8, and reduces the average surface distance (ASD) and 95% Hausdorff distance ($ {HD}_{95} $) by over 50%.
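To make the training objective described above concrete, the sketch below illustrates one plausible form of the hybrid loss: a supervised term on labeled images, a consistency term between the student (on strongly augmented images) and an EMA teacher (on weakly augmented images), and an adversarial term driven by the discriminator's confidence map. This is a minimal PyTorch sketch, not the paper's exact formulation; the function names, loss weights (`lambda_cons`, `lambda_adv`), and EMA decay are assumptions.

```python
# Hedged sketch of a hybrid semi-supervised loss (supervised + consistency + adversarial).
import torch
import torch.nn.functional as F


def hybrid_loss(student_logits_lab, labels,
                student_logits_strong, teacher_logits_weak,
                disc_confidence, lambda_cons=1.0, lambda_adv=0.1):
    # Supervised loss on labeled images (pixel-wise cross-entropy).
    sup = F.cross_entropy(student_logits_lab, labels)

    # Consistency loss: the student's prediction on the strongly augmented image
    # should match the teacher's prediction on the weakly augmented image.
    cons = F.mse_loss(torch.softmax(student_logits_strong, dim=1),
                      torch.softmax(teacher_logits_weak, dim=1).detach())

    # Adversarial loss: push the discriminator's confidence map on student
    # predictions toward "real", i.e. indistinguishable from ground-truth masks.
    adv = F.binary_cross_entropy_with_logits(
        disc_confidence, torch.ones_like(disc_confidence))

    return sup + lambda_cons * cons + lambda_adv * adv


@torch.no_grad()
def update_teacher(teacher, student, decay=0.99):
    # Exponential moving average update of the teacher's weights from the student.
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)
```

In a typical training loop, `hybrid_loss` would be backpropagated through the student only, after which `update_teacher` refreshes the teacher weights; the discriminator would be trained separately to distinguish predicted masks from ground-truth masks.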
