Abstract

Existing deep neural networks for brain tumor segmentation from multimodal MRIs rely predominantly on standard segmentation architectures, overlooking the underlying rules followed in clinical scenarios. To address this gap, we propose a novel deep multimodal network that architecturally mimics these rules and replicates in a machine the process by which human experts manually annotate brain tumors. Three key manual annotation rules are mimicked. First, three sub-tasks are introduced to segment the three tumor regions sequentially. Second, we incorporate a fine-grained modality attention module in each sub-task to mimic region-aware multimodal fusion. Finally, we propose a deep feature propagation module for multi-scale context exploitation and repurpose deep cascaded attention to exploit location information; together, these mimic how experts exploit knowledge from earlier segmentations. We use a new curriculum loss to train the nested network and a stratified training approach to reduce the intra-modality domain shift caused by the dataset's distribution. The proposed approach is evaluated and compared to state-of-the-art methods on three public datasets: BraTS2018, BraTS2019, and BraTS2020. The Dice similarity coefficient on BraTS2020 is 0.800, 0.930, and 0.857 for enhancing tumor, whole tumor, and tumor core, respectively, ranking sixth among 285 participating methods. These findings suggest the potential applicability of our approach to brain tumor segmentation from multimodal MRIs.
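To make the region-aware multimodal fusion concrete, the sketch below shows one plausible form of the fine-grained modality attention described above: a squeeze-and-excitation-style gate that re-weights the channels of the four MRI modality feature maps before they are fused within a sub-task. This is a minimal illustration of the idea, not the paper's implementation; the class name, channel sizes, and reduction factor are assumptions.

```python
import torch
import torch.nn as nn

class ModalityAttention(nn.Module):
    """Hypothetical fine-grained modality attention: re-weights features from
    the four MRI modalities (T1, T1ce, T2, FLAIR) before region-aware fusion."""
    def __init__(self, channels_per_modality=16, num_modalities=4, reduction=4):
        super().__init__()
        total = channels_per_modality * num_modalities
        self.pool = nn.AdaptiveAvgPool3d(1)       # global context per channel
        self.gate = nn.Sequential(                # squeeze-and-excitation style gate
            nn.Linear(total, total // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(total // reduction, total),
            nn.Sigmoid(),
        )

    def forward(self, modality_features):
        # modality_features: list of 4 tensors, each of shape (B, C, D, H, W)
        x = torch.cat(modality_features, dim=1)   # (B, 4*C, D, H, W)
        b, c = x.shape[:2]
        w = self.gate(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                              # modality-aware fused features


# Usage: per-region fusion inside one of the three sequential sub-tasks
feats = [torch.randn(2, 16, 32, 32, 32) for _ in range(4)]
fused = ModalityAttention()(feats)                # (2, 64, 32, 32, 32)
```

In the sequential design described in the abstract, each of the three sub-tasks could hold its own such module, so the modality weighting is learned separately per tumor region.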
