Abstract

Neural decoding plays a central role in computational neuroscience, where brain activity is interpreted automatically to address the challenging problem of mind reading. Analyzing vision-related EEG recordings is essential for discerning the relation between visual perception and brain activity. Building on recent advances in deep neural networks, several architectures have been implemented to decode brain activity. In this paper, a functional connectivity-based geometric deep network (FC-GDN) is proposed to leverage the spatio-temporally distributed information in image-evoked EEG recordings and to extract hidden states directly from high-resolution time samples while accounting for the functional connectivity between EEG channels. To this end, a topological connectivity graph is constructed from the functional connectivity between EEG channels, and the time samples of each channel are treated as a graph signal on the corresponding graph node. A novel graph neural network architecture is then built on this graph representation of EEG signals, in which visually evoked EEG recordings serve as training data to decode the participants' visual perception state in terms of EEG patterns associated with different image categories. The performance of the proposed FC-GDN is evaluated on the EEG-ImageNet dataset, which comprises 40 image categories with 50 sample images each, shown to six participants while their EEG signals were recorded. FC-GDN achieves an average accuracy of 98.4%, an average improvement of 1.1% over the best state-of-the-art method.
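The abstract describes the graph construction only at a high level. The sketch below illustrates the general idea of representing EEG channels as graph nodes connected by functional connectivity, with each channel's time samples attached as the node signal, followed by a single graph-convolution step. The connectivity measure (Pearson correlation), the sparsification threshold, and the layer dimensions are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code): build a functional-connectivity
# graph over EEG channels and apply one graph-convolution step to the node
# signals. The connectivity measure, threshold, and layer sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 128, 440                      # assumed EEG dimensions
eeg = rng.standard_normal((n_channels, n_samples))    # placeholder recording

# Functional connectivity: absolute Pearson correlation between channel pairs.
fc = np.abs(np.corrcoef(eeg))
np.fill_diagonal(fc, 0.0)

# Keep only the strongest connections to obtain a sparse topological graph.
adj = (fc > 0.3).astype(float) * fc                   # threshold 0.3 is assumed

# Each node (channel) carries its time samples as the graph signal.
x = eeg                                               # node features, shape (N, T)

# One graph-convolution step: symmetrically normalized adjacency with
# self-loops, then a linear projection with random weights (for illustration).
a_hat = adj + np.eye(n_channels)
d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
w = rng.standard_normal((n_samples, 64)) * 0.01       # hidden size 64 is arbitrary
hidden = np.maximum(a_norm @ x @ w, 0.0)              # ReLU activation
print(hidden.shape)                                   # (128, 64) node embeddings
```

In this reading, each graph-convolution layer mixes a channel's temporal signal with those of its functionally connected neighbors; stacking such layers and pooling the node embeddings would yield a representation suitable for classifying the image category that evoked the recording.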
