Abstract

Multi-modal brain networks characterize the complex connectivity among brain regions from both structural and functional perspectives and have been widely used in the analysis of brain diseases. Although many multi-modal brain network fusion methods have been proposed, most cannot effectively extract the spatio-temporal topological characteristics of brain networks while fusing the different modalities. In this paper, we develop an adaptive multi-channel graph convolutional network (GCN) fusion framework with graph contrastive learning, which not only effectively mines both the complementary and discriminative features of multi-modal brain networks but also captures their dynamic characteristics and topological structure. Specifically, we first divide the ROI-based time series into multiple overlapping time windows and construct a dynamic brain network representation from these windows. Second, we adopt an adaptive multi-channel GCN to extract the spatial features of the multi-modal brain networks under two contrastive constraints, multi-modal fusion InfoMax and inter-channel InfoMin, which are designed to extract the complementary information shared across modalities and the specific information within each single modality. Moreover, two stacked long short-term memory (LSTM) units capture the temporal information transferred across time windows. Finally, the extracted spatio-temporal features are fused, and a multilayer perceptron (MLP) produces the multi-modal brain network prediction. Experiments on an epilepsy dataset show that the proposed method outperforms several state-of-the-art methods in the diagnosis of brain diseases.
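To make the described pipeline concrete, the following is a minimal PyTorch sketch of the spatio-temporal workflow: overlapping sliding windows over ROI time series, one GCN channel per modality, a two-layer stacked LSTM, and an MLP classifier. All names, shapes, and hyper-parameters (e.g., `win_len`, `stride`, `hid`) are illustrative assumptions rather than the authors' implementation, and the adaptive fusion weights and the InfoMax/InfoMin contrastive losses are omitted for brevity.

```python
# Hypothetical sketch of the abstract's pipeline; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def sliding_windows(signals, win_len=30, stride=10):
    """Split ROI time series (n_rois, T) into overlapping windows and build
    one functional-connectivity matrix (Pearson correlation) per window."""
    n_rois, T = signals.shape
    adjs = []
    for start in range(0, T - win_len + 1, stride):
        seg = signals[:, start:start + win_len]
        adjs.append(torch.corrcoef(seg))          # (n_rois, n_rois)
    return torch.stack(adjs)                      # (n_windows, n_rois, n_rois)


class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(D^{-1/2} A D^{-1/2} H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        deg = adj.abs().sum(-1).clamp(min=1e-6)   # degree for normalization
        d_inv_sqrt = deg.pow(-0.5)
        a_norm = d_inv_sqrt.unsqueeze(-1) * adj * d_inv_sqrt.unsqueeze(-2)
        return F.relu(a_norm @ self.lin(h))


class SpatioTemporalFusion(nn.Module):
    """Per-window GCN channels (one per modality) -> readout -> stacked LSTM -> MLP."""
    def __init__(self, n_rois, hid=64, n_classes=2):
        super().__init__()
        self.gcn_func = GCNLayer(n_rois, hid)     # functional channel
        self.gcn_struct = GCNLayer(n_rois, hid)   # structural channel
        self.lstm = nn.LSTM(2 * hid, hid, num_layers=2, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(hid, hid), nn.ReLU(),
                                 nn.Linear(hid, n_classes))

    def forward(self, func_adjs, struct_adj):
        # func_adjs: (n_windows, n_rois, n_rois); struct_adj: (n_rois, n_rois)
        feats = []
        for a in func_adjs:
            x = torch.eye(a.shape[0])             # identity node features
            zf = self.gcn_func(x, a).mean(0)      # graph-level mean readout
            zs = self.gcn_struct(x, struct_adj).mean(0)
            feats.append(torch.cat([zf, zs]))     # concatenate modality channels
        seq = torch.stack(feats).unsqueeze(0)     # (1, n_windows, 2*hid)
        out, _ = self.lstm(seq)                   # temporal modeling across windows
        return self.mlp(out[:, -1])               # predict from the last state
```

For example, `SpatioTemporalFusion(n_rois=90)(sliding_windows(torch.randn(90, 200)), torch.rand(90, 90))` returns class logits of shape `(1, 2)`; in practice the contrastive terms described in the abstract would be added to the training objective.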
