Abstract

A central problem in neuroscience is how to correctly decode cognitive information from brain dynamics for motion control and neural rehabilitation. However, because electroencephalogram (EEG) recordings are unstable and high dimensional, it is difficult to obtain information directly from the raw data. In this work, we therefore design visual experiments and propose a novel decoding method based on the neural manifold of cortical activity to identify critical visual information. First, we studied four major EEG frequency bands and found that responses to visual stimuli in the alpha band (8–15 Hz) over the frontal and occipital lobes are the most prominent. The essential features of the alpha-band EEG data are then extracted via two manifold learning methods. We connect temporally consecutive brain states in the t-distributed stochastic neighbor embedding (t-SNE) map on a trial-by-trial basis and find that the brain-state dynamics form a cyclic manifold, with different tasks forming distinct loops. We further show that the latent factors of brain activity estimated by t-SNE support more accurate decoding and reveal a stable neural manifold. Taking the latent factors of the manifold as independent inputs, a Takagi–Sugeno–Kang (TSK) fuzzy model is established and trained to identify visual EEG signals. The combination of t-SNE and fuzzy learning improves the accuracy of visual cognitive decoding to 81.98%. Moreover, by optimizing the features, we find that the combination of the frontal, parietal, and occipital lobes is the most effective for visual decoding, reaching 83.05% accuracy. This work provides a potential tool for decoding visual EEG signals with the help of low-dimensional manifold dynamics, contributing in particular to brain–computer interface (BCI) control, brain function research, and neural rehabilitation.
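The pipeline described above — band-pass filtering EEG into the alpha band, then embedding trials into a low-dimensional manifold with t-SNE — can be illustrated with a minimal sketch. This is not the authors' implementation; the data shape, sampling rate, and t-SNE parameters are illustrative assumptions, and synthetic noise stands in for recorded EEG.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Hypothetical EEG: 60 trials x 8 channels x 2 s at 250 Hz (synthetic stand-in)
fs = 250
eeg = rng.standard_normal((60, 8, 2 * fs))

# Band-pass each trial to the alpha band (8-15 Hz, as in the abstract)
b, a = butter(4, [8 / (fs / 2), 15 / (fs / 2)], btype="band")
alpha = filtfilt(b, a, eeg, axis=-1)

# Flatten each trial into a feature vector, then embed with t-SNE
features = alpha.reshape(len(alpha), -1)
embedding = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(features)
print(embedding.shape)  # (60, 2)
```

The resulting two-dimensional latent factors would then serve as inputs to a classifier (a TSK fuzzy model in the paper); plotting temporally consecutive points is how the cyclic trial-level manifold described in the abstract would be visualized.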

Highlights

  • The human brain readily makes sense of visual images with specific dynamics in a complex environment, but how to quantify the visual response remains poorly understood (Kourtzi and Kanwisher, 2000; Pasley et al., 2012)

  • Decoding human brain activity triggered by visual stimuli has a significant impact on brain–computer interface (BCI), brain-inspired computing, and machine vision research (Hogendoorn and Burkitt, 2018)

  • Cunningham and Byron point out that the majority of sensory, cognitive, and motor functions depend on interactions among many neurons, and that the data cannot be fundamentally understood at the level of a single neuron (Cunningham and Byron, 2014)


Introduction

The human brain readily makes sense of visual images with specific dynamics in a complex environment, but how to quantify the visual response remains poorly understood (Kourtzi and Kanwisher, 2000; Pasley et al., 2012). Visual stimulation is transmitted to the cortex, triggering specific dynamics that support cognitive functions such as memory and imagination (de Beeck et al., 2008; Wen et al., 2018). Recent research has demonstrated that human brain activity can be decoded from neurological data (Zheng et al., 2020); however, because such data tend to be high dimensional and unstable, it is difficult to decode useful information directly from them. The study of the neural system is undergoing a transition from the single-neuron level to the population level (Pandarinath et al., 2018). A new analytical method is needed to decode brain activity directly from a population perspective and to investigate the underlying visual mechanisms.

