Abstract

MRI brain image analysis, including brain tumor detection, is a challenging task. MRI images are multimodal, and in recent years multimodal medical image analysis has received increasing attention. Modalities refer to data from multiple sources that are semantically correlated and sometimes provide complementary information. In this paper, the modalities of MRI brain images are the different planes of view (axial, sagittal, and coronal) in which the images are acquired. Most of the literature on multimodal data analysis assumes that all modalities are available for all samples. In medical image analysis, however, this assumption does not hold, and only some modalities may be available for each sample. We address this challenge in MRI brain image segmentation through knowledge transfer between and within modalities. For knowledge transfer, domain adaptation is a key step, as it deals with the mismatch between the training and test distributions. These challenges have not been considered in recent multimodal brain image analysis studies. This paper proposes a new multimodal deep transfer learning approach for MRI brain image analysis. Its main differences from other multimodal brain image analysis methods are 1) a new multimodal feature encoder and 2) a new multimodal adaptation technique that handles the distribution mismatch between the training and test sets. We evaluated the proposed approach on the IBSR and Figshare brain tumor datasets. The results confirm that it significantly outperforms comparable approaches.
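To make the two ideas named above concrete, the following is a minimal, hypothetical sketch of a per-view feature encoder that tolerates missing modalities, paired with a simple distribution-matching loss (linear maximum mean discrepancy) between training (source) and test (target) features. It is not the paper's actual architecture; the names ViewEncoder, MultimodalEncoder, and mmd_loss, the layer sizes, and the use of PyTorch are all illustrative assumptions.

```python
# Illustrative sketch only: a per-view MRI encoder plus a simple
# source/target distribution-matching loss. Not the paper's method.
import torch
import torch.nn as nn


class ViewEncoder(nn.Module):
    """Small CNN that encodes one MRI view (plane) into a feature vector."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )

    def forward(self, x):
        return self.net(x)


class MultimodalEncoder(nn.Module):
    """One encoder per view; absent views are simply skipped and the
    available features are averaged, so samples with only some
    modalities can still be encoded."""
    def __init__(self, views=("axial", "sagittal", "coronal"), out_dim=128):
        super().__init__()
        self.encoders = nn.ModuleDict({v: ViewEncoder(out_dim) for v in views})

    def forward(self, inputs):
        # inputs: dict mapping view name -> image tensor; missing views omitted
        feats = [self.encoders[v](x) for v, x in inputs.items()]
        return torch.stack(feats).mean(dim=0)


def mmd_loss(source_feats, target_feats):
    """Linear MMD: penalize the gap between the mean feature embeddings
    of the source (training) and target (test) distributions."""
    return (source_feats.mean(0) - target_feats.mean(0)).pow(2).sum()


# Usage: encode whatever views each batch has, then add the adaptation
# loss to the task loss during training.
enc = MultimodalEncoder()
src = {"axial": torch.randn(4, 1, 64, 64), "coronal": torch.randn(4, 1, 64, 64)}
tgt = {"sagittal": torch.randn(4, 1, 64, 64)}
adaptation_loss = mmd_loss(enc(src), enc(tgt))
```

Averaging the per-view features is one simple fusion choice that keeps the encoder well defined under missing modalities; the adaptation term can likewise be swapped for other divergence measures without changing the overall structure.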
