Deep learning has been widely used for brain tumor segmentation with multi-modality magnetic resonance imaging (MRI), helping doctors achieve faster and more accurate diagnoses. Previous studies have demonstrated that weighted fusion segmentation methods effectively capture modality importance, laying a solid foundation for multi-modality MRI segmentation. However, the challenge of fusing multi-modality features with single-modality features remains unresolved, which motivated us to explore an effective fusion solution. We propose a multi-modality and single-modality feature recalibration network for MRI brain tumor segmentation. Specifically, we designed a dual recalibration module that achieves accurate feature calibration by integrating the complementary features of multiple modalities with the specific features of a single modality. Experimental results on the BraTS 2018 dataset showed that the proposed method outperformed existing multi-modal network methods across multiple evaluation metrics, with spatial recalibration significantly improving the results: Dice scores increased by 1.7%, 0.5%, and 1.6% for the enhancing tumor, whole tumor, and tumor core regions, respectively.
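The abstract does not specify the internal structure of the dual recalibration module, so the following is only a minimal, illustrative sketch of one plausible design: a block that recalibrates a single-modality feature map along both channel and spatial dimensions using gates computed from the fused multi-modality features. All names (DualRecalibration, channel_gate, spatial_gate) and the PyTorch 3D layout are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a dual (channel + spatial) recalibration block.
# Assumes PyTorch and 3D feature maps of shape (B, C, D, H, W).
import torch
import torch.nn as nn


class DualRecalibration(nn.Module):
    """Recalibrates single-modality features using fused multi-modality context."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel branch: global context of the fused features gates the channels
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: per-voxel attention map derived from the fused features
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, single_feat: torch.Tensor, fused_feat: torch.Tensor) -> torch.Tensor:
        channel_weights = self.channel_gate(fused_feat)   # (B, C, 1, 1, 1)
        spatial_weights = self.spatial_gate(fused_feat)   # (B, 1, D, H, W)
        recalibrated = single_feat * channel_weights * spatial_weights
        # Residual connection preserves the original modality-specific information
        return single_feat + recalibrated


if __name__ == "__main__":
    block = DualRecalibration(channels=32)
    single = torch.randn(1, 32, 16, 16, 16)   # e.g. features from one MRI modality
    fused = torch.randn(1, 32, 16, 16, 16)    # fused multi-modality features
    print(block(single, fused).shape)          # torch.Size([1, 32, 16, 16, 16])
```

In this sketch the spatial gate is what would correspond to the spatial recalibration credited in the abstract with the Dice improvements; the actual module in the paper may differ in both structure and placement within the network.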