Abstract

Existing medical image segmentation models tend to achieve satisfactory performance when the training and test data are drawn from the same distribution, but often suffer significant performance degradation when evaluated on cross-modality data. To facilitate the deployment of deep learning models in real-world medical scenarios and to mitigate the performance degradation caused by domain shift, we propose an unsupervised cross-modality segmentation framework based on representation disentanglement and image-to-image translation. Our approach builds on a multimodal image translation framework, which assumes that the latent space of images can be decomposed into a content space and a style space. First, encoders decompose image representations into content and style codes, which are recombined to generate cross-modality images. Second, we propose content and style reconstruction losses to preserve the semantic information of the original images, and construct content discriminators to match the content distributions of the source and target domains. Synthetic images with target-domain style and source-domain anatomical structures are then used to train the segmentation model. We applied our framework to bidirectional adaptation experiments on MRI and CT images of abdominal organs. Compared to the case without adaptation, the Dice similarity coefficient (DSC) increased by almost 30% and 25%, and the average symmetric surface distance (ASSD) dropped by 13.3 and 12.2, respectively. The proposed unsupervised domain adaptation framework effectively improves cross-modality segmentation performance and minimizes the negative impact of domain shift, while the translated images retain the semantic information and anatomical structure of the source. Our method significantly outperforms several competing methods.
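The decompose-recombine idea behind the translation step, and the DSC metric used for evaluation, can be illustrated with a minimal Python sketch. This is not the paper's network: here "style" is approximated by simple first- and second-order intensity statistics (mean and standard deviation, in the spirit of AdaIN-style translation) and "content" by the normalized signal, and all names and example intensities are hypothetical.

```python
import statistics

def decompose(signal):
    # Toy analogue of the content/style encoders: the "style" code is the
    # intensity statistics (mean, std) and the "content" code is the
    # normalized signal, which carries the anatomical structure.
    mu = statistics.mean(signal)
    sigma = statistics.pstdev(signal) or 1.0  # guard against flat signals
    content = [(v - mu) / sigma for v in signal]
    return content, (mu, sigma)

def recombine(content, style):
    # Toy decoder: re-apply a style code to a content code. Pairing a
    # source content code with a target style code yields a "translated"
    # signal with target-domain appearance and source-domain structure.
    mu, sigma = style
    return [c * sigma + mu for c in content]

def dice_coefficient(pred, target):
    # Dice similarity coefficient for flat binary masks:
    # DSC = 2|A ∩ B| / (|A| + |B|).
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0

# Hypothetical source (MRI-like) and target (CT-like) intensity profiles.
mri = [0.2, 0.8, 0.4, 0.6]
ct = [30.0, 80.0, 50.0, 60.0]

content, _ = decompose(mri)        # keep source content
_, ct_style = decompose(ct)        # borrow target style
translated = recombine(content, ct_style)  # cross-modality "translation"
```

Recombining a signal's own content and style codes reconstructs it exactly, which is the intuition behind the reconstruction losses; the translated signal instead inherits the target domain's intensity statistics while preserving the source's structure.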

