Abstract

Gliomas are the most common primary intracranial tumors, and accurate grading is crucial for determining treatment options and prognosis. Clinicians conventionally rely on multiple Magnetic Resonance Imaging (MRI) sequences for reliable glioma assessment. Traditional deep learning methods, however, typically operate on an individual MRI sequence with Region of Interest (ROI) annotations, incurring substantial manual effort while discarding the complementary information provided by the other sequences. The task is thus inherently multi-modal, yet existing multi-modal methods often adopt complex multi-stream networks for feature extraction and fusion, leading to high resource demands and potential feature redundancy. To address these challenges, a discrepancy-aware self-distillation method is proposed for multi-modal glioma grading that requires only a single-stream network to analyze multiple MRI sequences concurrently, without auxiliary ROIs. First, the Modality Discrepancy-aware Fusion (MDF) module accounts for the imaging differences among MRI sequences and widens inter-modal contrasts, highlighting modality-specific features while diminishing modality-invariant ones. Second, the proposed Class Activation Self-Distillation (CASD) strategy leverages the generated Class Activation Maps (CAMs) as dark knowledge for distillation, guiding shallow layers to focus on category-discriminative features within the lesion region. Extensive experiments on the BraTS2018 and BraTS2019 datasets confirm the effectiveness of the method, which outperforms other relevant glioma grading and multi-modal fusion approaches on both datasets.
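
The abstract does not specify the implementation, so the following PyTorch sketch shows only one plausible reading of the two components: a fusion step that amplifies each modality's residual against a shared (modality-invariant) estimate, and a CAM-matching loss that distills a deep layer's activation map into a shallow layer. All names, shapes, and design choices here (the learnable per-modality weights, the MSE matching, 2D slices instead of 3D volumes) are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityDiscrepancyAwareFusion(nn.Module):
    """Hypothetical MDF module: widens inter-modal contrasts so that
    modality-specific features are highlighted and modality-invariant
    (shared) features are diminished before single-stream processing."""

    def __init__(self, num_modalities: int = 4):
        super().__init__()
        # Learnable per-modality weights that re-scale the discrepancy term.
        self.alpha = nn.Parameter(torch.ones(num_modalities))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, M, H, W) -- M stacked MRI sequences (e.g. T1, T1ce, T2, FLAIR),
        # shown here as 2D slices for brevity.
        shared = x.mean(dim=1, keepdim=True)      # modality-invariant estimate
        discrepancy = x - shared                  # modality-specific residual
        # Amplify the residual while keeping only one copy of the shared part.
        return shared + self.alpha.view(1, -1, 1, 1) * discrepancy


def casd_loss(shallow_cam: torch.Tensor, deep_cam: torch.Tensor) -> torch.Tensor:
    """Hypothetical CASD objective: distill the deep layer's CAM (the 'dark
    knowledge') into a shallow layer, steering it toward the
    category-discriminative lesion region."""
    # Match the deep CAM to the shallow CAM's spatial resolution.
    deep_cam = F.interpolate(deep_cam, size=shallow_cam.shape[-2:],
                             mode="bilinear", align_corners=False)

    def norm(cam: torch.Tensor) -> torch.Tensor:
        # Min-max normalize each map to [0, 1] before comparing them.
        cam = cam - cam.amin(dim=(-2, -1), keepdim=True)
        return cam / (cam.amax(dim=(-2, -1), keepdim=True) + 1e-6)

    # Detach the deep CAM so the deep layer acts as the (fixed) teacher.
    return F.mse_loss(norm(shallow_cam), norm(deep_cam).detach())


if __name__ == "__main__":
    mdf = ModalityDiscrepancyAwareFusion(num_modalities=4)
    x = torch.randn(2, 4, 128, 128)   # 2 cases, four MRI sequences each
    s_cam = torch.randn(2, 1, 32, 32) # shallow-layer CAM (illustrative)
    d_cam = torch.randn(2, 1, 8, 8)   # deep-layer CAM (illustrative)
    print(mdf(x).shape, casd_loss(s_cam, d_cam).item())
```

In a full training loop, a term like this CASD loss would presumably be added to the cross-entropy grading loss with a weighting coefficient; detaching the deep CAM is what makes the scheme self-distillation rather than a joint alignment of the two layers.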
