Abstract

Purpose: Deep learning has made remarkable progress in classifying autism spectrum disorder (ASD) from neuroimaging data. However, current methods rely mainly on supervised learning, which requires large amounts of manually labeled data, making them expensive and difficult to scale.

Methods: To overcome this limitation, we propose a novel ensemble-based framework that learns a transferable and generalizable visual representation from different self-supervised features for the downstream task of ASD classification. The framework dynamically learns a superior representation by aggregating complementary information in the frequency domain from independent self-supervised features with limited data. Additionally, to address the information loss caused by the dimensionality reduction of 3D fMRI data, we propose a thresholding algorithm that optimally extracts the most discriminative features from 2D rs-fMRI data.

Results: Experimental results demonstrate that the proposed method outperforms previous state-of-the-art methods by 19.69% on the ABIDE-1 dataset, with a 10-fold cross-validation accuracy of 94.51%.

Conclusion: The proposed method learns a transferable and generalizable ensembled representation by leveraging complementary information encoded in different self-supervised representations for ASD classification.
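The abstract does not spell out the thresholding algorithm itself. The sketch below is a minimal illustration of the general idea only, assuming the 2D rs-fMRI data take the form of a functional-connectivity (ROI-by-ROI correlation) matrix; the function name `threshold_features` and the `keep_ratio` parameter are hypothetical and should not be read as the paper's actual method.

```python
import numpy as np

def threshold_features(conn_matrix: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Retain only the strongest entries of a 2D connectivity matrix.

    conn_matrix : 2D array, e.g. an ROI x ROI correlation matrix
                  derived from rs-fMRI (assumption for this sketch)
    keep_ratio  : fraction of entries to keep (hypothetical parameter)
    """
    flat = np.abs(conn_matrix).ravel()
    # Magnitude cutoff such that roughly `keep_ratio` of entries survive
    cutoff = np.quantile(flat, 1.0 - keep_ratio)
    # Zero out everything below the cutoff, keeping the most
    # discriminative (strongest-magnitude) connections
    mask = np.abs(conn_matrix) >= cutoff
    return conn_matrix * mask

# Toy usage: a random 4-ROI "correlation" matrix
rng = np.random.default_rng(0)
toy = rng.uniform(-1.0, 1.0, size=(4, 4))
print(threshold_features(toy, keep_ratio=0.25))
```

Selecting a cutoff by quantile rather than a fixed value keeps the retained fraction of features stable across subjects whose connectivity strengths differ in scale; whether the paper's algorithm does this is an open question from the abstract alone.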
