Abstract

In neonatal brain magnetic resonance imaging (MRI) segmentation, a model trained on the training set (source domain) often performs poorly in clinical practice (target domain). Because labels for target-domain images are unavailable, this cross-domain segmentation requires unsupervised domain adaptation (UDA) to adapt the model to the target domain. However, the shape and intensity distributions of neonatal brain MRI differ largely across domains, much more so than for adult brains. Current UDA methods aim to make synthesized images resemble the target domain as a whole, but the regional misalignment caused by cross-domain differences makes it impossible to synthesize images with intraclass similarity, so the intensity information generated from target-domain images is intraclassly incorrect. To address this issue, we propose IAS-NET, a framework that jointly trains an intraclassly adaptive generative adversarial network (GAN) (IA-NET) and a segmentation network to bridge the gap between the two domains through intraclass alignment. IAS-NET is a learning framework that transfers the appearance of images across domains from both the image and the feature perspective. It consists of the proposed IA-NET and a segmentation network (S-NET). IA-NET is a GAN-based adaptive network containing one generator (two encoders and one shared decoder) and four discriminators for cross-domain transfer. The two encoders extract image, mean, and variance features from the source and target domains. The proposed local adaptive instance normalization algorithm performs intraclass feature alignment to the target domain at the feature-map level. S-NET is a U-Net-style network that provides a semantic constraint, via a segmentation loss, for training IA-NET; it also supplies pseudo-label images for computing intraclass features of the target domain.
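The core idea of the local adaptive instance normalization step can be sketched as class-wise feature statistics matching: within each tissue class (given by source labels and target pseudo-labels), source features are whitened with their own class statistics and re-colored with the target's class statistics. The sketch below is a minimal NumPy illustration of this idea; the function name `local_adain` and the exact masking scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def local_adain(feat_src, feat_tgt, mask_src, mask_tgt, num_classes, eps=1e-5):
    """Align per-class (intraclass) feature statistics of a source feature
    map to those of a target feature map.

    feat_src, feat_tgt : (H, W, C) feature maps from the two domains.
    mask_src           : (H, W) integer label map for the source features.
    mask_tgt           : (H, W) integer pseudo-label map for the target features.
    """
    out = feat_src.copy()
    for c in range(num_classes):
        src_idx = mask_src == c
        tgt_idx = mask_tgt == c
        if not src_idx.any() or not tgt_idx.any():
            continue  # class absent in one domain: leave those features unchanged
        src_feats = feat_src[src_idx]  # (N_src, C) features inside class c
        tgt_feats = feat_tgt[tgt_idx]  # (N_tgt, C)
        mu_s, sig_s = src_feats.mean(0), src_feats.std(0)
        mu_t, sig_t = tgt_feats.mean(0), tgt_feats.std(0)
        # Whiten with source-class statistics, re-color with target-class statistics.
        out[src_idx] = (src_feats - mu_s) / (sig_s + eps) * sig_t + mu_t
    return out
```

In this sketch the per-class mean and standard deviation of the output match the target domain's statistics for that class, which is the intraclass analogue of standard (whole-image) adaptive instance normalization.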
Source code (in TensorFlow) is available at https://github.com/lb-whu/RAS-NET/. Extensive experiments were carried out on two data sets (NeoBrainS12 and dHCP), whose magnetic resonance (MR) images differ greatly in shape, size, and intensity distribution. Compared to the baseline, adaptive training with unlabeled dHCP images improves the average Dice score over all tissues on NeoBrainS12 by 6%; experiments on dHCP likewise improve the average Dice score by 4%. Quantitative analysis of the mean and variance of the synthesized images shows that images synthesized by the proposed method are closer to the target domain, both over the full brain and within each class, than those of the compared methods. The proposed IAS-NET effectively improves the performance of S-NET through its intraclass feature alignment in the target domain. Compared with current UDA methods, the images synthesized by IAS-NET are more intraclassly similar to the target domain for neonatal brain MR images, and IAS-NET therefore achieves state-of-the-art results among the compared UDA models for the segmentation task.
