Abstract

Medical image fusion aims to integrate the complementary information captured by images of different modalities into a single, more informative composite image. However, current research on medical image fusion suffers from several drawbacks: 1) Existing methods are mostly designed for 2-D slice fusion, and they tend to lose spatial contextual information when fusing volumetric medical images slice by slice. 2) The few existing 3-D medical image fusion methods fail to sufficiently consider the characteristics of the source modalities, leading to the loss of important modality information. 3) Most existing works concentrate on pursuing good performance in visual perception and objective evaluation, while clinical problem-oriented studies are severely lacking. To address these issues, we propose a multimodal MRI volumetric data fusion method based on an end-to-end convolutional neural network (CNN). In our network, an attention-based multimodal feature fusion (MMFF) module is presented for more effective feature learning. In addition, a specific loss function that accounts for the characteristics of different MRI modalities is designed to preserve modality information. Experimental results demonstrate that the proposed method obtains more competitive results in both visual quality and objective assessment than several representative 3-D and 2-D medical image fusion methods. We further verify the value of the proposed method for brain tumor segmentation by using it to enrich the input modalities, and the results show that it helps improve segmentation accuracy. The source code of our fusion method is available at https://github.com/yuliu316316/3D-CNN-Fusion.
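To make the idea of attention-based fusion of volumetric features concrete, below is a minimal PyTorch-style sketch of how two 3-D feature volumes from different MRI modalities could be fused with a channel-attention gate. It is illustrative only: the class name AttentionFusion3D, the squeeze-and-excitation-style gating, and all layer sizes are assumptions, not the MMFF architecture described in the paper (see the repository linked above for the actual implementation).

```python
# Illustrative sketch (not the authors' MMFF module): channel-attention fusion
# of two 3-D feature volumes, assuming both modality branches produce features
# of identical shape (N, C, D, H, W).
import torch
import torch.nn as nn


class AttentionFusion3D(nn.Module):
    """Hypothetical attention-based fusion block for two 3-D feature volumes."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Squeeze-and-excitation-style gate over the concatenated modalities.
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.gate = nn.Sequential(
            nn.Conv3d(2 * channels, 2 * channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(2 * channels // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.merge = nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (N, C, D, H, W) features from the two modality branches.
        x = torch.cat([feat_a, feat_b], dim=1)
        attn = self.gate(self.pool(x))   # per-channel weights in (0, 1)
        x = x * attn                     # re-weight each modality's channels
        return self.merge(x)             # project back to C fused channels


if __name__ == "__main__":
    fuse = AttentionFusion3D(channels=16)
    a = torch.randn(1, 16, 32, 32, 32)   # e.g. features from one MRI modality
    b = torch.randn(1, 16, 32, 32, 32)   # e.g. features from another modality
    print(fuse(a, b).shape)              # torch.Size([1, 16, 32, 32, 32])
```

In this sketch, the channel-wise gate lets the network emphasize whichever modality's features are more informative for each channel before the fused volume is projected back to the original channel width.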
