Abstract

Decoding a person’s cognitive contents from evoked brain activity is becoming important in the field of brain-computer interaction. Previous studies have decoded a perceived image from functional magnetic resonance imaging (fMRI) activity by constructing brain decoding models that were trained with a single subject’s fMRI data. However, accurate decoding is still challenging since fMRI data acquired from only a single subject have several disadvantageous characteristics such as small sample size, noisy nature, and high dimensionality. In this article, we propose a method to decode categories of perceived images from fMRI activity using shared information of multi-subject fMRI data. Specifically, by aggregating fMRI data of multiple subjects that contain a large number of samples, we extract a low-dimensional latent representation shared by multi-subject fMRI data. Then the latent representation is nonlinearly transformed into visual features and semantic features of the perceived images to identify categories from various candidate categories. Our approach leverages rich information obtained from multi-subject fMRI data and improves the decoding performance. Experimental results obtained by using two public fMRI datasets showed that the proposed method can more accurately decode categories of perceived images from fMRI activity than previous approaches using a single subject’s fMRI data.

Highlights

  • Brain-computer interfaces (BCIs) have the potential to develop an intuitive communication method between computer systems and humans

  • Multi-subject functional magnetic resonance imaging (fMRI) data were aligned via approaches based on canonical correlation analysis (CCA) [19]–[22] and an approach based on convolutional autoencoder (CAE) [23]

  • In this paper, we have presented a method for decoding categories of perceived images from fMRI activity using shared information of multi-subject fMRI data


Summary

INTRODUCTION

Brain-computer interfaces (BCIs) have the potential to develop an intuitive communication method between computer systems and humans. In recent brain decoding approaches [7]–[10], image categories (e.g., bird, human, and car) were estimated from functional magnetic resonance imaging (fMRI) activity measured while subjects perceived several images. These attempts could enable the development of a BCI with an information retrieval system that uses brain activity as queries. We propose a method to decode categories of perceived images from fMRI activity using shared information of multi-subject fMRI data. After training the proposed method, we decode the category from fMRI activity acquired from a single subject who perceived an independent test image. In the preliminary work [17], the latent representation shared by multi-subject fMRI data was linearly transformed into visual features and semantic features. The proposed method can decode arbitrary categories from fMRI activity by comparing the transformed visual and semantic features with those features from a large number of candidate categories.

MULTI-SUBJECT FMRI DATA ANALYSIS
PROPOSED APPROACH
MULTI-SUBJECT BAYESIAN GENERATIVE MODEL
OPTIMIZATION OF MODEL PARAMETERS
TRANSFORMATION TO VISUAL AND SEMANTIC FEATURES VIA A MULTILAYER PERCEPTRON
DECODING OF IMAGE CATEGORIES FROM A SINGLE SUBJECT’S FMRI ACTIVITY
EXPERIMENTAL RESULTS
DATASET
CONCLUSION