Abstract

Unsupervised domain adaptation methods for multi-modal medical image segmentation use joint training to segment medical images from different modalities simultaneously. Owing to the domain shift between modalities and the scarcity of labeled medical images, the accuracy of these methods still needs improvement. In this work, we present a novel unsupervised domain adaptation method, Dual Attention-guided and Learnable spatial transformation data Augmentation multi-modal unsupervised medical image segmentation (DALA). First, we introduce a position and channel Dual Attention Mechanism (Dual Attent-M) into the low-level encoder to improve the network's feature extraction ability and strengthen its domain adaptation training. Second, we propose a learnable Spatial Transformation data Augmentation method (Spatial Tran-Aug) that learns the spatial mapping between source and target images to synthesize high-quality training data. Experiments on the Multi-Modality Whole Heart Segmentation (MMWHS) dataset show that, compared with multi-modal segmentation methods such as PnP-AdaNet, SynSeg-Net, AdaOutput, CyCADA, Prior SIF, and SIFA, DALA achieves better segmentation results: the average Dice score rises to 78.2% on CT and 67.9% on MR, and the mean ASSD falls to 4.4 and 4.7, respectively.
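
The position and channel dual attention named in the abstract suggests a DANet-style design. Below is a minimal PyTorch sketch of such a block; the module structure, layer sizes, and names (PositionAttention, ChannelAttention, DualAttention) are illustrative assumptions, not the authors' exact DALA architecture.

```python
# Minimal sketch of a position + channel dual attention block, in the
# spirit of DANet-style attention. Layer sizes and names are assumptions
# for illustration, not the paper's exact Dual Attent-M module.
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Spatial (position) self-attention over an H*W feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)          # (B, HW, C/8)
        k = self.key(x).flatten(2)                            # (B, C/8, HW)
        attn = torch.softmax(q @ k, dim=-1)                   # (B, HW, HW)
        v = self.value(x).flatten(2)                          # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class ChannelAttention(nn.Module):
    """Channel self-attention over channel-channel affinities."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        f = x.flatten(2)                                      # (B, C, HW)
        attn = torch.softmax(f @ f.transpose(1, 2), dim=-1)   # (B, C, C)
        out = (attn @ f).view(b, c, h, w)
        return self.gamma * out + x

class DualAttention(nn.Module):
    """Sum of position- and channel-attended features."""
    def __init__(self, channels: int):
        super().__init__()
        self.pam = PositionAttention(channels)
        self.cam = ChannelAttention()

    def forward(self, x):
        return self.pam(x) + self.cam(x)
```

Similarly, a learnable spatial transformation for augmentation can be sketched as a spatial-transformer-style module that predicts a warp from a source/target pair and applies it to the source image. The localization network below and the restriction to affine warps are assumptions for illustration; the paper's Spatial Tran-Aug may learn a richer deformation.

```python
# Sketch of a learnable spatial transformation for data augmentation,
# in the spirit of a spatial transformer network (STN). Hypothetical
# module, not the paper's exact Spatial Tran-Aug design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableSpatialAug(nn.Module):
    """Predicts an affine source-to-target warp from an image pair and
    applies it to the source image to synthesize training data."""
    def __init__(self, in_channels: int = 2):  # assumes 1-channel images
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(in_channels, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 6),  # 2x3 affine parameters
        )
        # initialize the predicted warp to the identity transform
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, source, target):
        theta = self.loc(torch.cat([source, target], dim=1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, source.size(), align_corners=False)
        return F.grid_sample(source, grid, align_corners=False)
```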
