Abstract
The aim of this work was to establish a multi-dimensional representation of Alzheimer's disease (AD) based solely on structural MRI (sMRI) for early diagnosis. sMRI scans from a total of 3377 participants across four independent databases were retrospectively collected to construct an interpretable deep learning model, called s2MRI-ADNet, that integrates multi-dimensional representations of AD from sMRI alone through a dual-channel learning strategy combining gray matter volume (GMV) in Euclidean space and the regional radiomics similarity network (R2SN) in graph space. Specifically, the GMV feature-map learning channel (GMV-Channel) captures spatial information spanning both long-range spatial relations and fine-grained localization, while the node feature and connectivity strength learning channel (NFCS-Channel) characterizes the graph-structured R2SN through a separable learning strategy. The s2MRI-ADNet achieved classification accuracies of 92.1% and 91.4% under intra-database and inter-database cross-validation, respectively. The GMV-Channel and NFCS-Channel captured complementary group-discriminative brain regions, providing complementary interpretations of the multi-dimensional representation of brain structure in Euclidean and graph spaces. Moreover, the group-discriminative brain regions identified by this multi-dimensional representation were significantly correlated across the four independent databases (p < 0.05), demonstrating a generalizable and reproducible interpretation. Significant associations (p < 0.05) between attention scores and brain abnormality, and between classification scores and clinical measures of cognitive ability, CSF biomarkers, metabolism, and genetic risk scores, further provided a solid neurobiological interpretation.
Relying solely on sMRI, s2MRI-ADNet leverages the complementary multi-dimensional representations of AD in Euclidean and graph spaces and achieves superior performance in the early diagnosis of AD, facilitating its potential for both clinical translation and widespread adoption.
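To make the dual-channel idea concrete, the following is a minimal conceptual sketch, not the authors' implementation: one channel derives Euclidean-space features from a 3D GMV map by coarse spatial pooling, the other derives graph-space features from an R2SN-style connectivity matrix as node strengths plus edge weights, and the two feature vectors are fused for a single classification score. All shapes, the 90-region atlas size, and the linear read-out are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy inputs (shapes are illustrative, not the paper's) ---
gmv_map = rng.random((16, 16, 16))  # hypothetical 3D gray-matter-volume map
r2sn = rng.random((90, 90))         # hypothetical 90-region R2SN matrix
r2sn = (r2sn + r2sn.T) / 2          # symmetrize: undirected similarity network
np.fill_diagonal(r2sn, 0.0)         # no self-connections

def gmv_channel(vol, pool=4):
    """Euclidean-space channel: coarse spatial pooling of the GMV map,
    standing in for a convolutional feature extractor."""
    s = pool
    x, y, z = vol.shape
    pooled = vol.reshape(x // s, s, y // s, s, z // s, s).mean(axis=(1, 3, 5))
    return pooled.ravel()  # flattened spatial feature vector

def nfcs_channel(adj):
    """Graph-space channel: separable node-feature and connectivity-strength
    descriptors of the network."""
    node_strength = adj.sum(axis=1)                 # node feature: weighted degree
    edge_strength = adj[np.triu_indices_from(adj, k=1)]  # upper-triangle edges
    return np.concatenate([node_strength, edge_strength])

# Fuse the two channels and apply a linear read-out with a sigmoid,
# a stand-in for the model's fused classifier head.
features = np.concatenate([gmv_channel(gmv_map), nfcs_channel(r2sn)])
w = rng.standard_normal(features.size) / np.sqrt(features.size)
score = 1.0 / (1.0 + np.exp(-(features @ w)))  # classification score in (0, 1)
print(features.size)
```

The separable treatment of node features and edge strengths in `nfcs_channel` mirrors the NFCS-Channel's strategy of learning the two aspects of the graph separately before fusion.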