Abstract

Building brain-computer fusion systems that integrate biological and machine intelligence has become a research topic of great interest. Recent research has shown that human brain activity can be decoded from neurological data, and deep learning has become an effective way to solve practical problems. Taking advantage of these trends, in this paper we propose a novel method for decoding brain activity evoked by visual stimuli. We first introduce a combined long short-term memory–convolutional neural network (LSTM-CNN) architecture to extract compact, category-dependent representations of electroencephalogram (EEG) signals. Our approach combines the ability of the LSTM to extract sequential features with the capability of the CNN to distil local features. Next, we employ an improved spectral normalization generative adversarial network (SNGAN) to conditionally generate images from the learned EEG features. We evaluate our approach in terms of EEG classification accuracy and the quality of the generated images. The results show that the proposed LSTM-CNN algorithm discriminates object classes from EEG more accurately than existing methods. In qualitative and quantitative tests, the improved SNGAN performs better on the task of generating conditional images from the learned EEG representations; the produced images are realistic and closely resemble the originals. Our method can reconstruct the content of visual stimuli from the brain's response, and therefore helps decode human brain activity through an image-EEG-image transformation.
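The abstract describes a two-stage pipeline: an LSTM consumes the EEG time series, a CNN distils local features from the LSTM outputs, and the resulting feature vector conditions a spectrally normalized GAN. The following is a minimal PyTorch sketch of such an encoder, not the paper's exact architecture; the layer sizes, the 128-electrode/440-time-step input shape, and the 40-class output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LSTMCNNEncoder(nn.Module):
    """Illustrative LSTM-CNN EEG encoder (sizes are assumptions, not the paper's)."""
    def __init__(self, n_channels=128, hidden=64, feat_dim=32, n_classes=40):
        super().__init__()
        # LSTM captures sequential structure across EEG time steps
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            batch_first=True)
        # 1-D CNN distils local features from the LSTM's output sequence
        self.cnn = nn.Sequential(
            nn.Conv1d(hidden, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time to a fixed-size vector
        )
        self.fc = nn.Linear(feat_dim, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        seq, _ = self.lstm(x)             # (batch, time, hidden)
        feat = self.cnn(seq.transpose(1, 2)).squeeze(-1)  # (batch, feat_dim)
        return feat, self.fc(feat)        # feature for the GAN, class logits

# A SNGAN discriminator layer would wrap weights in spectral normalization, e.g.:
sn_layer = nn.utils.spectral_norm(nn.Linear(32, 1))

enc = LSTMCNNEncoder()
eeg = torch.randn(2, 440, 128)  # 2 trials, 440 time steps, 128 electrodes (assumed)
feat, logits = enc(eeg)
print(feat.shape, logits.shape)  # torch.Size([2, 32]) torch.Size([2, 40])
```

The pooled feature vector `feat` plays the role of the learned EEG representation that conditions the image generator, while the classification head provides the category-discrimination signal the abstract evaluates.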
