Abstract

Visual evoked potentials are neural oscillations recorded from the brain's electrical activity while a subject views an image or video stimulus. With the advancement of deep learning techniques, decoding visually evoked EEG (ElectroEncephaloGram) signals has become an active area of study in both neuroscience and computer vision. Deep learning techniques can learn problem-specific features automatically, eliminating the traditional feature extraction procedure. In this work, a convolutional neural network (CNN) based model is used to classify EEG signals evoked by visual stimuli from the 10-class (digit images 0–9) MindBigData dataset, without the need for an additional feature extraction step. The raw EEG signals are converted to spectrogram images, since CNNs are known to work well with images. Three pretrained CNN models, AlexNet, VGGNet, and ResNet, were trained to determine the ideal parameters and structure of the proposed CNN model. The proposed architecture comprises four convolutional layers, a max pooling layer, and a fully connected layer; it takes spectrogram images as input and classifies the EEG signals evoked by the 10 digit classes. An overall average accuracy of 91.29% is achieved, which outperforms the pretrained CNN models.

Keywords: ElectroEncephaloGram, Visual stimuli, Convolutional neural network, MindBigData, Spectrogram
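The abstract's preprocessing step, converting a raw EEG trace into a spectrogram image, can be sketched with a short-time Fourier transform. This is a minimal NumPy illustration; the window length, hop size, and log scaling used here are assumptions, as the abstract does not specify the STFT parameters used in the paper.

```python
import numpy as np

def eeg_to_spectrogram(signal, win_len=128, hop=64):
    """Convert a 1-D raw EEG trace into a log-magnitude spectrogram.

    win_len and hop are illustrative values only; the paper does not
    state which STFT parameters it used.
    """
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frame = signal[start:start + win_len] * window
        spectrum = np.abs(np.fft.rfft(frame))      # magnitude per frequency bin
        frames.append(np.log1p(spectrum))          # log scale compresses dynamic range
    # rows: frequency bins, columns: time frames -> a 2-D image-like array
    return np.array(frames).T

# Example: a synthetic 2-second "EEG" trace sampled at 128 Hz
fs = 128
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
spec = eeg_to_spectrogram(eeg)
print(spec.shape)  # (65, 3): 65 frequency bins x 3 time frames
```

The resulting 2-D array can be rendered or resized into the fixed-size image that a CNN such as the proposed model (or AlexNet, VGGNet, ResNet) expects as input.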
