Abstract

Functional magnetic resonance imaging (fMRI) is widely used for brain semantic decoding. However, because fMRI data acquisition is time-consuming and expensive, existing fMRI datasets usually contain few samples, and it is difficult to build an accurate brain decoding model for a subject with insufficient data. Most semantic decoding methods focus on designing predictive models with limited samples, while less attention is paid to fMRI data augmentation. Leveraging data from related but different subjects is a promising strategy for improving the predictive model, but it faces two challenges: 1) feature mismatch and 2) distribution mismatch. In this paper, we propose a multi-subject fMRI data augmentation method that addresses both challenges and improves the decoding accuracy of the target subject. Specifically, information is translated from one subject to another through multiple subject-specific encoders, decoders, and discriminators. Each encoder maps its subject's data into a shared latent space, solving the feature mismatch problem; the decoders and discriminators form multiple generative adversarial network (GAN) architectures, solving the distribution mismatch problem. Meanwhile, to ensure that the latent representation preserves the information of the input space, our method not only minimizes the local data reconstruction loss but also preserves the sparse reconstruction (semantic) relation over the whole input dataset. Extensive experiments on three fMRI datasets demonstrate the effectiveness of the proposed method.
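The cross-subject translation idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes linear encoders/decoders with random weights, invented subject names and dimensions, and omits the GAN discriminators and the sparse-reconstruction constraint that the actual method trains jointly. It only shows how subject-specific encoders resolve feature mismatch (different voxel counts) by meeting in a shared latent space, through which one subject's data can be decoded into another subject's feature space as augmented samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two subjects with different voxel counts
# (the feature mismatch), mapped into one shared latent space.
d_subj = {"A": 120, "B": 90}   # voxels per subject (assumed values)
d_latent = 16                  # shared latent dimension (assumed)
n = 50                         # samples per subject

# Subject-specific linear encoders/decoders, randomly initialized
# for this sketch; the real method learns them with adversarial
# and reconstruction losses.
enc = {s: rng.normal(0, 0.1, (d, d_latent)) for s, d in d_subj.items()}
dec = {s: rng.normal(0, 0.1, (d_latent, d)) for s, d in d_subj.items()}

# Toy fMRI feature matrices, one per subject.
X = {s: rng.normal(size=(n, d)) for s, d in d_subj.items()}

def encode(s, x):
    """Map subject s's data into the shared latent space."""
    return x @ enc[s]

def decode(s, z):
    """Map latent codes back into subject s's feature space."""
    return z @ dec[s]

def recon_loss(s):
    """Local data reconstruction loss for subject s (MSE)."""
    z = encode(s, X[s])
    return np.mean((decode(s, z) - X[s]) ** 2)

# Cross-subject translation: encode subject A's data, decode it as
# subject B, yielding augmented samples in B's feature space.
z_a = encode("A", X["A"])
aug_for_B = decode("B", z_a)
print(aug_for_B.shape)  # (50, 90): A's data translated into B's space
```

In the full method, a per-subject discriminator would be trained to tell translated samples like `aug_for_B` apart from real samples of subject B, pushing the translated distribution toward the target subject's distribution (the distribution mismatch problem).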

