Abstract

Accurate segmentation of brain tumors plays an important role in clinical diagnosis and treatment. Multimodal magnetic resonance imaging (MRI) provides rich and complementary information for accurate brain tumor segmentation. However, some modalities may be absent in clinical practice, and it remains challenging to integrate incomplete multimodal MRI data for accurate segmentation. In this paper, we propose a brain tumor segmentation method based on a multimodal transformer network for incomplete multimodal MRI data. The network adopts a U-Net architecture consisting of modality-specific encoders, a multimodal transformer, and a multimodal shared-weight decoder. First, a convolutional encoder is built to extract the specific features of each modality. Then, a multimodal transformer is proposed to model the correlations among multimodal features and learn the features of missing modalities. Finally, a multimodal shared-weight decoder progressively aggregates the multimodal and multi-level features with spatial and channel self-attention modules for brain tumor segmentation. A missing-full complementary learning strategy is used to exploit the latent correlation between the missing and full modalities for feature compensation. For evaluation, our method is tested on multimodal MRI data from the BraTS 2018, BraTS 2019, and BraTS 2020 datasets. Extensive experimental results demonstrate that our method outperforms state-of-the-art methods for brain tumor segmentation on most subsets of missing modalities.
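To make the described pipeline concrete, below is a minimal PyTorch sketch of the three-stage design (modality-specific encoders, multimodal transformer, shared-weight decoder). The class names, channel widths, token layout, and the zero-filling convention for missing modalities are illustrative assumptions, not the authors' implementation; the spatial and channel self-attention modules and the missing-full complementary learning strategy are omitted for brevity.

```python
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Convolutional encoder extracting the specific features of one MRI modality."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, stride=2, padding=1),
            nn.InstanceNorm3d(ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, stride=2, padding=1),
            nn.InstanceNorm3d(ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):          # x: (B, 1, D, H, W)
        return self.net(x)         # (B, ch, D/4, H/4, W/4)


class IncompleteMultimodalSegNet(nn.Module):
    """Sketch: per-modality encoders -> multimodal transformer -> shared decoder."""
    def __init__(self, n_modalities=4, n_classes=4, ch=32):
        super().__init__()
        self.encoders = nn.ModuleList(ModalityEncoder(ch) for _ in range(n_modalities))
        # Learnable modality embeddings distinguish tokens of different modalities.
        self.modality_embed = nn.Parameter(torch.zeros(n_modalities, ch))
        layer = nn.TransformerEncoderLayer(d_model=ch, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Shared-weight decoder; the paper's spatial/channel self-attention
        # modules and multi-level skip connections are omitted here.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(ch, ch, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(ch, ch, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv3d(ch, n_classes, 1),
        )

    def forward(self, images, present):
        # images: (B, M, D, H, W), missing modalities zero-filled;
        # present: (B, M) bool mask of available modalities.
        feats = torch.stack(
            [enc(images[:, m:m + 1]) for m, enc in enumerate(self.encoders)], dim=1
        )                                                        # (B, M, C, d, h, w)
        B, M, C, d, h, w = feats.shape
        n = d * h * w
        tokens = feats.flatten(3).permute(0, 1, 3, 2)            # (B, M, n, C)
        tokens = tokens + self.modality_embed[None, :, None, :]  # tag modality identity
        tokens = tokens.reshape(B, M * n, C)
        # Tokens of absent modalities are masked out as attention keys, but they
        # still query the available modalities, so their features are
        # reconstructed from cross-modality correlations.
        pad = (~present)[:, :, None].expand(B, M, n).reshape(B, M * n)
        fused = self.transformer(tokens, src_key_padding_mask=pad)
        fused = fused.reshape(B, M, n, C).mean(dim=1)            # aggregate modalities
        fused = fused.permute(0, 2, 1).reshape(B, C, d, h, w)
        return self.decoder(fused)


# Usage: four BraTS modalities (e.g., T1, T1ce, T2, FLAIR) with T2 missing.
net = IncompleteMultimodalSegNet()
imgs = torch.randn(1, 4, 32, 32, 32)                  # toy volume size
present = torch.tensor([[True, True, False, True]])
logits = net(imgs, present)                           # (1, 4, 32, 32, 32)
```

In this sketch, key-padding masks keep absent modalities from contaminating attention while their query positions are still filled in by the transformer, which is one straightforward way to realize "learning the features of missing modalities"; the paper's actual compensation mechanism additionally uses the missing-full complementary learning strategy.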
