Abstract

Purpose: Deep learning has made remarkable progress in classifying autism spectrum disorder (ASD) from neuroimaging data. However, current methods rely mainly on supervised learning, which requires large amounts of manually labeled data and is therefore expensive and difficult to scale.

Methods: To overcome this limitation, we propose a novel ensemble-based framework that learns a transferable and generalizable visual representation from different self-supervised features for the downstream task of ASD classification. The framework dynamically learns a superior representation by aggregating complementary information in the frequency domain from independent self-supervised features using limited data. Additionally, to address the information loss caused by the dimensionality reduction of 3D fMRI data, we propose a thresholding algorithm that extracts the most discriminant features from 2D rs-fMRI data.

Results: Experimental results demonstrate that the proposed method outperforms previous state-of-the-art methods by 19.69% on the ABIDE-1 dataset, achieving a 10-fold cross-validation accuracy of 94.51%.

Conclusion: The proposed method learns a transferable and generalizable ensembled representation by leveraging complementary information encoded in different self-supervised representations for ASD classification.
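The abstract does not specify how the thresholding step scores features, so the following is only a minimal illustrative sketch, assuming a simple class-separation score (absolute difference of class means) over flattened 2D feature maps; the function name, scoring rule, and `keep_ratio` parameter are hypothetical, not the authors' algorithm.

```python
import numpy as np

def select_discriminant_features(X, y, keep_ratio=0.1):
    """Keep the top fraction of features ranked by a simple
    discriminability proxy (absolute difference of class means).

    X : (n_samples, n_features) flattened 2D rs-fMRI-like data
    y : (n_samples,) binary labels (0 = control, 1 = ASD)
    Returns the reduced matrix and the boolean feature mask.
    """
    mu0 = X[y == 0].mean(axis=0)          # per-feature mean, class 0
    mu1 = X[y == 1].mean(axis=0)          # per-feature mean, class 1
    scores = np.abs(mu1 - mu0)            # separation score per feature
    k = max(1, int(keep_ratio * X.shape[1]))
    threshold = np.sort(scores)[-k]       # score of the k-th best feature
    mask = scores >= threshold
    return X[:, mask], mask

# Toy example: 20 subjects, 100 features; make 5 features informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 100))
y = np.array([0] * 10 + [1] * 10)
X[y == 1, :5] += 2.0                      # shift class-1 means on 5 features
X_sel, mask = select_discriminant_features(X, y, keep_ratio=0.05)
```

Any supervised feature-scoring rule (t-statistic, mutual information, etc.) could be substituted for the class-mean difference without changing the thresholding structure.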
