Abstract

Multimodal magnetic resonance imaging (MRI) provides complementary information about target structures, and multimodal MRI segmentation is widely used as an essential preprocessing step for initial diagnosis, stage differentiation, and post-treatment efficacy evaluation in clinical practice. Whether segmenting a main modality or all modalities jointly, it is important to enhance the visual information of each modality by modeling inter-modality connections and effectively fusing features across them. However, existing multimodal segmentation methods share a drawback: they inadvertently discard modality-specific information during fusion. Recently, graph learning-based methods have been applied to segmentation and have achieved considerable improvements by modeling relationships across feature regions and reasoning with global information. In this paper, we propose a graph learning-based approach that efficiently extracts modality-specific features and effectively establishes regional correspondence among all modalities. Specifically, we first project features into a graph domain and apply graph convolution to propagate information across all regions, learning global modality-specific features. We then propose a mutual-information-based graph co-attention module that learns the edge weights of a bipartite graph connecting the fully connected graphs of the different modalities and selectively fuses their node features. Building on the transformation between the spatial and graph domains and on the proposed graph co-attention module, we present a multimodal prior-guided segmentation framework with two strategies for two clinical situations: a Modality-Specific Learning Strategy and a Co-Modality Learning Strategy. In addition, an improved Co-Modality Learning Strategy with trainable weights in the multi-task loss is used to optimize the proposed framework. We validated the proposed modules and framework on two multimodal MRI datasets: a private liver lesion dataset and a public prostate zone dataset. Experimental results on both datasets demonstrate the superiority of our approach.
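To make the pipeline described above concrete, below is a minimal PyTorch sketch of the two core operations the abstract outlines: projecting spatial features into a graph domain followed by graph convolution over the resulting nodes, and a co-attention step over a bipartite graph between two modality-specific graphs. The class names, shapes, GloRe-style soft projection, and dot-product attention are illustrative assumptions, not the authors' implementation; in particular, the paper's co-attention module is mutual-information-based, which this sketch does not reproduce.

```python
# Illustrative sketch only; module names, shapes, and the GloRe-style
# projection are assumptions, not the authors' published implementation.
import torch
import torch.nn as nn

class GraphProjection(nn.Module):
    """Project a (B, C, H, W) feature map onto N graph nodes of dim D."""
    def __init__(self, in_channels, num_nodes, node_dim):
        super().__init__()
        self.assign = nn.Conv2d(in_channels, num_nodes, kernel_size=1)  # soft region assignment
        self.embed = nn.Conv2d(in_channels, node_dim, kernel_size=1)    # per-pixel node features

    def forward(self, x):
        a = self.assign(x).flatten(2).softmax(dim=-1)    # (B, N, HW) assignment weights
        f = self.embed(x).flatten(2)                     # (B, D, HW)
        nodes = torch.bmm(a, f.transpose(1, 2))          # (B, N, D) region/node features
        return nodes, a

class GraphConv(nn.Module):
    """One fully connected graph-convolution step with a trainable adjacency."""
    def __init__(self, num_nodes, node_dim):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(num_nodes))    # learned node-to-node edges
        self.fc = nn.Linear(node_dim, node_dim)

    def forward(self, nodes):
        # Propagate information across all regions, then transform features.
        return torch.relu(self.fc(torch.matmul(self.adj, nodes)))

class GraphCoAttention(nn.Module):
    """Bipartite co-attention: modality-A nodes selectively attend to modality-B nodes."""
    def __init__(self, node_dim):
        super().__init__()
        self.q = nn.Linear(node_dim, node_dim)
        self.k = nn.Linear(node_dim, node_dim)
        self.v = nn.Linear(node_dim, node_dim)

    def forward(self, nodes_a, nodes_b):
        attn = torch.bmm(self.q(nodes_a), self.k(nodes_b).transpose(1, 2))
        attn = attn.softmax(dim=-1)                      # cross-modal edge weights
        return nodes_a + torch.bmm(attn, self.v(nodes_b))  # selective node fusion
```

A hypothetical usage, fusing feature maps from two MRI modalities and re-projecting the fused nodes back to the spatial domain via the stored assignment weights:

```python
proj = GraphProjection(256, num_nodes=32, node_dim=128)
gcn = GraphConv(32, 128)
coatt = GraphCoAttention(128)
t1, t2 = torch.randn(2, 256, 16, 16), torch.randn(2, 256, 16, 16)  # two modalities
na, aa = proj(t1)
nb, _ = proj(t2)
fused = coatt(gcn(na), gcn(nb))                  # (B, N, D) fused graph nodes
spatial = torch.bmm(aa.transpose(1, 2), fused)   # (B, HW, D) back in the spatial domain
```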
