Abstract

Resting-state brain networks represent the interconnectivity of different brain regions during rest. Utilizing brain network analysis methods to model these networks can enhance our understanding of how different brain regions collaborate and communicate without explicit external stimuli. However, analyzing resting-state brain networks is challenging due to high heterogeneity and noisy correlations between subjects. This study proposes a brain structure learning-guided multi-view graph representation learning method to address the limitations of current brain network analysis and improve diagnostic accuracy (ACC) for mental disorders. We first used multiple thresholds to generate brain networks at different sparsity levels. Subsequently, we introduced graph pooling to optimize the brain network representation by reducing noisy edges and data inconsistency, thereby providing more reliable input for subsequent graph convolutional networks (GCNs). Following this, we designed a multi-view GCN to comprehensively capture the complexity and variability of brain structure. Finally, we employed an attention-based adaptive module to adjust the contributions of different views and facilitate their fusion. Considering that the Smith atlas offers superior characterization of resting-state brain networks, we utilized it to construct the graph network. Experiments on two mental disorder datasets, the Autism Brain Imaging Data Exchange (ABIDE) dataset and the Mexican Cocaine Use Disorders (SUDMEX CONN) dataset, show that our model outperforms state-of-the-art methods, achieving nearly 75% ACC and 70% area under the receiver operating characteristic curve (AUC) on both datasets.
These findings demonstrate that our method of combining multi-view graph learning and brain structure learning can effectively capture crucial structural information in brain networks while facilitating the acquisition of feature information from diverse perspectives, thereby improving the performance of brain network analysis.
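Two of the pipeline steps summarized above lend themselves to a short illustration: thresholding a functional connectivity matrix at several sparsity levels to obtain multiple graph "views", and fusing per-view embeddings with learned attention weights. The sketch below is a minimal NumPy illustration of those two ideas only; the function names, the quantile-based thresholding rule, and the softmax scoring are illustrative assumptions, not the paper's actual implementation (which uses graph pooling and GCN encoders between these steps).

```python
import numpy as np

def sparsify(conn, keep_ratio):
    """Zero out all but the strongest |keep_ratio| fraction of edges.

    conn: symmetric region-by-region connectivity matrix.
    Assumption: edge strength is measured by absolute correlation and the
    cutoff is a quantile of the off-diagonal values.
    """
    n = conn.shape[0]
    strength = np.abs(conn).astype(float)
    np.fill_diagonal(strength, 0.0)  # ignore self-connections
    upper = strength[np.triu_indices(n, k=1)]
    thresh = np.quantile(upper, 1.0 - keep_ratio)
    sparse = np.where(strength >= thresh, conn, 0.0)
    np.fill_diagonal(sparse, 0.0)
    return sparse

def attention_fuse(view_embeddings, scores):
    """Softmax-weighted fusion of per-view embedding vectors.

    In the paper the scores would come from a learned attention module;
    here they are passed in directly for illustration.
    """
    w = np.exp(scores - scores.max())  # numerically stable softmax
    w /= w.sum()
    return sum(wi * e for wi, e in zip(w, view_embeddings))

# Build multiple sparsity views from one connectivity matrix,
# then fuse (stand-in) per-view embeddings.
rng = np.random.default_rng(0)
C = rng.standard_normal((10, 10))
C = (C + C.T) / 2  # symmetrize
views = [sparsify(C, r) for r in (0.1, 0.2, 0.4)]
embeddings = [np.full(4, float(i + 1)) for i in range(3)]  # placeholder GCN outputs
fused = attention_fuse(embeddings, np.array([0.0, 0.0, 0.0]))
```

With equal attention scores the fusion reduces to a simple average of the view embeddings; in the paper's adaptive module the weights would instead be driven by each view's relevance to the diagnostic task.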
