Abstract
Brain neural activity decoding is an important branch of neuroscience research and a key technology for the brain–computer interface (BCI). Researchers initially developed simple linear models and machine learning algorithms to classify and recognize brain activities. With the great success of deep learning in image recognition and generation, deep neural networks (DNN) have been applied to reconstructing visual stimuli from human brain activity recorded via functional magnetic resonance imaging (fMRI). In this paper, we review brain activity decoding models based on machine learning and deep learning algorithms. Specifically, we focus on the brain activity decoding models currently receiving the most attention: the variational auto-encoder (VAE), the generative adversarial network (GAN), and the graph convolutional network (GCN). Furthermore, fMRI-based BCI applications enabled by brain neural activity decoding in the treatment of mental and psychological diseases are presented to illustrate the positive correlation between brain decoding and BCI. Finally, existing challenges and future research directions are addressed.
Highlights
In recent years, the concept of the brain–computer interface has gradually entered the public's field of vision and has become a hot topic in the field of brain research
This framework consists of a linear shape decoder, a semantic decoder based on deep neural networks (DNN), and an image generator based on a generative adversarial network (GAN)
The challenges to brain decoding can be summarized in three aspects: (1) the limited capacity of the mapping model between brain activity and visual stimuli; (2) the shortage of paired data between visual stimuli and brain activity; and (3) the interference of noise in functional magnetic resonance imaging (fMRI) signals [108]
Summary
The concept of the brain–computer interface has gradually entered the public's field of vision and has become a hot topic in the field of brain research. The difficulty in brain decoding lies in reconstructing visual stimuli through cognition models of the brain's visual cortex together with learning algorithms [21–24]. Researchers believe that the high-level visual cortex, i.e., the ventral temporal cortex, is the key to decoding semantic information and natural images from brain activity [31]. The structure of deep neural networks (DNN) resembles the feedforward pathway of the human visual system [36], so it is not surprising that DNNs can be used to decode visual stimuli from brain activity [30]. Studies [37,38] mapped multi-level features of the human visual cortex to the hierarchical features of a pre-trained DNN, thereby exploiting the information carried by hierarchical visual features. With limited fMRI and labeled image data, the GCN-based decoding model can provide an automated tool to derive the cognitive state of the brain [47].