Abstract

Many fully automatic segmentation models have been developed to address the challenge of brain tumor segmentation, thanks to the rapid growth of deep learning. However, few approaches focus on the long-range relationships and contextual interdependencies in multimodal Magnetic Resonance (MR) images. In this paper, we propose a novel approach for brain tumor segmentation called the dual graph reasoning unit (DGRUnit). Our proposed method comprises two parallel graph reasoning modules: a spatial reasoning module and a channel reasoning module. The spatial reasoning module uses a graph convolutional network (GCN) to model the long-range spatial dependencies between distinct regions of an image. The channel reasoning module uses a graph attention network (GAT) to model the rich contextual interdependencies between different channels with similar semantic representations. Our experimental results clearly demonstrate the superior performance of the proposed DGRUnit. The ablation study shows the flexibility and generalizability of our model, which can be easily integrated into a wide range of neural networks and further improve their performance. Compared with several state-of-the-art methods, the proposed approach yields significant improvements on brain tumor segmentation tasks, both in visual quality and in quantitative metrics.
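The two parallel branches described above can be caricatured with a minimal NumPy sketch. This is only an illustrative toy, not the paper's implementation: the graph construction, layer shapes, LeakyReLU slope, and the additive fusion of the two branches are all assumptions made for demonstration.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step (spatial-reasoning analogue):
    propagate node features over the symmetrically normalized adjacency
    A_hat = D^-1/2 (A + I) D^-1/2, project with W, then apply ReLU."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

def attention_aggregate(H, a):
    """Attention-style aggregation (channel-reasoning analogue):
    score each node, form pairwise additive scores, softmax-normalize
    per row, and mix node features by the resulting weights."""
    scores = H @ a                                # per-node score
    e = scores[:, None] + scores[None, :]         # pairwise scores
    e = np.where(e > 0, e, 0.2 * e)               # LeakyReLU (slope assumed)
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)     # row-wise softmax
    return alpha @ H

# Toy example: 4 "region" nodes with 3-dim features on a ring graph.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 3))

H_spatial = gcn_layer(A, H, W)                    # spatial branch
# Channel branch: treat the 3 channels as graph nodes (transpose view).
H_channel = attention_aggregate(H.T, rng.standard_normal(4)).T @ W
fused = H_spatial + H_channel                     # fusion by addition (assumed)
print(fused.shape)  # (4, 3)
```

The key contrast the sketch tries to capture is that the spatial branch mixes information *across positions* via a fixed, degree-normalized adjacency, while the channel branch learns *data-dependent* mixing weights over channels through attention.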
