Abstract

Neuroimaging plays a significant role in the diagnosis and pathological study of brain diseases. Considering that both functional and structural abnormalities may lead to brain diseases and disorders, a single-modality neuroimaging approach may not fully characterize brain activities and working modes. Fusion of multimodal neuroimaging data is expected to provide a more comprehensive characterization of brain diseases, given that the different modalities contain complementary information. Recently, Graph Convolutional Networks (GCNs) have shown powerful capacity in representation learning for graph-structured data, integrating both graph semantic structure and node information. Therefore, in this paper, we propose the Weighted Graph AutoEncoder (WGAE), a GCN-driven multimodal fusion model, to learn a combined latent node representation of fMRI and DTI neuroimaging data in an unsupervised manner, where fMRI data serve as node features and DTI data define the graph structure. Experimental results on two real-world datasets show the superiority of the proposed model over existing single-modal and multimodal methods in learning representations for disease prediction as the downstream task. Furthermore, ablation experiments demonstrate the collaborative contribution of multimodal neuroimaging fusion in the proposed model and show the feasibility of assessing the respective importance of the two modalities during disease prediction.
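To make the fusion scheme described above concrete, the following is a minimal sketch of a GCN-based graph autoencoder in which fMRI-derived features act as node attributes and a DTI-derived connectivity matrix defines the graph. All layer sizes, the adjacency normalization, the inner-product decoder, the reconstruction loss, and every variable name are illustrative assumptions; they do not reproduce the authors' WGAE implementation or its specific weighting scheme.

```python
# Illustrative sketch (assumed details), not the authors' WGAE code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalized_adjacency(dti_connectivity: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize a DTI connectivity matrix: D^-1/2 (A + I) D^-1/2."""
    a = dti_connectivity + torch.eye(dti_connectivity.size(0))
    deg = a.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.clamp(min=1e-12).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

class GCNLayer(nn.Module):
    """One graph convolution: propagate over the graph, then apply a linear map."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        return self.linear(a_hat @ x)

class GraphAutoEncoder(nn.Module):
    """Encoder: two GCN layers over fMRI node features and DTI graph structure.
    Decoder: inner product of latent embeddings reconstructing the adjacency."""
    def __init__(self, in_dim: int, hid_dim: int = 64, lat_dim: int = 16):
        super().__init__()
        self.gcn1 = GCNLayer(in_dim, hid_dim)
        self.gcn2 = GCNLayer(hid_dim, lat_dim)

    def forward(self, x, a_hat):
        z = self.gcn2(F.relu(self.gcn1(x, a_hat)), a_hat)  # latent node embeddings
        recon = torch.sigmoid(z @ z.t())                   # reconstructed adjacency
        return z, recon

# Toy usage: brain regions as nodes, fMRI features per region, DTI connectivity as graph.
n_regions, feat_dim = 90, 116
fmri_features = torch.randn(n_regions, feat_dim)           # node features (fMRI)
dti_adj = torch.rand(n_regions, n_regions)
dti_adj = (dti_adj + dti_adj.t()) / 2                      # symmetric connectivity
a_hat = normalized_adjacency(dti_adj)

model = GraphAutoEncoder(feat_dim)
z, recon = model(fmri_features, a_hat)
# Unsupervised reconstruction loss; the weighting used in WGAE is not specified here.
loss = F.binary_cross_entropy(recon, (dti_adj > 0.5).float())
```

In this reading, the latent embeddings z obtained without labels would then be fed to a downstream classifier for disease prediction, matching the unsupervised-representation-plus-prediction pipeline the abstract describes.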
