Abstract
Functional magnetic resonance imaging (fMRI) is a powerful technique with the potential to estimate individual variations in behavioral and cognitive traits. Joint learning of multiple datasets can exploit their complementary information to improve learning performance, but it also raises the data-fusion challenge of effectively integrating the brain patterns elicited by different fMRI data. However, most current data fusion methods analyze each dataset separately and then infer relationships among them; they therefore fail to exploit the multidimensional structure inherent across modalities and may ignore complex but important interactions. To address this issue, we propose a novel sparse tensor decomposition method to integrate multiple task-stimulus (paradigm) fMRI data. Treating each fMRI paradigm as one modality, the proposed method considers the relationships across subjects and modalities simultaneously. Specifically, a third-order tensor is first constructed from the functional network connectivity (FNC) of subjects under multiple fMRI paradigms. A sparse tensor decomposition with regularization terms is then designed to factorize the tensor into a series of rank-one components, extracting the components shared across modalities as embedded features. An L2,1-norm regularizer (i.e., group sparsity) is enforced to select a small set of features common to multiple subjects. The proposed method is validated on three real paradigm fMRI datasets from the Philadelphia Neurodevelopmental Cohort (PNC) study to examine the relationship between FNC and human cognitive abilities. Experimental results show that our method outperforms several competing methods in predicting individual cognitive ability as measured by the Wide Range Achievement Test (WRAT). Furthermore, our method identifies FNC related to cognitive behavior, such as connectivity associated with the default mode network (DMN) across all three paradigms, and connectivity between the DMN and visual (VIS) domains within the emotion task.
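The abstract does not specify the optimization procedure, but the core idea can be sketched as an alternating-least-squares CP-style factorization of the (subjects x FNC features x paradigms) tensor, with a group soft-thresholding (L2,1 proximal) step that zeroes out weak FNC features across all components. The sketch below is illustrative only, not the authors' implementation: the function names (sparse_cp, prox_l21), the choice of sparsifying the feature factor, and the regularization weight lam are assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest (C order)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product of U (I x R) and V (J x R), giving (I*J) x R."""
    R = U.shape[1]
    return (U[:, None, :] * V[None, :, :]).reshape(-1, R)

def prox_l21(M, t):
    """Row-wise group soft-thresholding: the proximal operator of t * ||M||_{2,1}."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    return M * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

def sparse_cp(X, rank, lam=0.1, n_iter=100, seed=0):
    """CP-style decomposition of a (subjects x features x paradigms) tensor
    into `rank` rank-one components, with an L2,1 group-sparsity step on the
    feature factor (illustrative sketch, not the paper's exact algorithm)."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))   # subject loadings
    B = rng.standard_normal((J, rank))   # FNC-feature loadings (sparsified)
    C = rng.standard_normal((K, rank))   # paradigm (modality) loadings
    for _ in range(n_iter):
        # Alternating least squares: each factor solves a linear system built
        # from the Khatri-Rao product of the other two factors.
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        # Group-sparse step: FNC edges carrying little weight in every
        # component are zeroed out together, selecting a few shared features.
        B = prox_l21(B, lam)
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

In this sketch, X[i, :, k] would hold the vectorized upper triangle of subject i's FNC matrix for paradigm k, and the nonzero rows of the returned B would index the FNC features selected as common across subjects and modalities.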