Abstract
Neuroimaging studies often collect multimodal data. These modalities contain both shared and mutually exclusive information about the brain. This work aims to find a scalable and interpretable method for fusing the information from multiple neuroimaging modalities into a lower-dimensional latent space using a variational autoencoder (VAE). To assess whether the encoder-decoder pair retains meaningful information, this work evaluates the representations on a schizophrenia classification task. A linear classifier trained on the representations obtained through dimensionality reduction achieves an area under the receiver operating characteristic curve (ROC-AUC) of 0.8609. Thus, training on a multimodal dataset comprising functional brain networks and structural magnetic resonance imaging (sMRI) scans leads to a dimensionality reduction that retains meaningful information. The proposed dimensionality reduction outperforms both early- and late-fusion principal component analysis on the classification task.

Clinical relevance: This work examines the interplay between neuroimaging modalities and their relation to mental disorders, enabling more complex and rigorous analysis of multimodal neuroimaging data in clinical settings.
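The evaluation pipeline implied by the abstract, reducing dimensionality and then training a linear classifier scored by ROC-AUC, can be sketched for the two PCA baselines it mentions. This is an illustrative sketch only: the synthetic data, feature dimensions, component counts, and function names below are assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, size=n)                       # synthetic diagnosis labels
fnc = rng.normal(size=(n, 300)) + y[:, None] * 0.2   # stand-in functional network features
smri = rng.normal(size=(n, 500)) + y[:, None] * 0.2  # stand-in sMRI-derived features

def evaluate(Z, y):
    """Train a linear classifier on latent codes Z and report held-out ROC-AUC."""
    Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, random_state=0, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
    return roc_auc_score(y_te, clf.decision_function(Z_te))

# Early fusion: concatenate raw modalities, then apply a single PCA.
early = PCA(n_components=16).fit_transform(np.hstack([fnc, smri]))

# Late fusion: apply PCA per modality, then concatenate the latent codes.
late = np.hstack([PCA(n_components=8).fit_transform(fnc),
                  PCA(n_components=8).fit_transform(smri)])

print("early-fusion PCA AUC:", evaluate(early, y))
print("late-fusion PCA AUC:", evaluate(late, y))
```

In the paper's proposed method, the PCA step would be replaced by the encoder of a trained multimodal VAE; the linear classification and ROC-AUC scoring stay the same.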
Published in: Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)