Abstract

Much attention has been paid to recognizing human emotions from electroencephalogram (EEG) signals using machine learning. Emotion recognition is a challenging task due to the non-linear nature of the EEG signal. This paper presents an advanced signal-processing method that uses a deep neural network (DNN) for emotion recognition based on EEG signals. The spectral and temporal components of the raw EEG signal are first retained in a 2D spectrogram before feature extraction. A pre-trained AlexNet model is then used to extract raw features from the 2D spectrogram of each channel. To reduce feature dimensionality, a spatial- and temporal-based bag of deep features (BoDF) model is proposed. A vocabulary consisting of 10 cluster centers per class is computed using the k-means clustering algorithm. Finally, the emotion of each subject is represented by a histogram over the vocabulary set, built from the raw features of each channel. Features extracted with the proposed BoDF model have considerably smaller dimensions. When validated on the SJTU SEED and DEAP data sets, the proposed model achieves better classification accuracy than recently reported work. For classification, we use a support vector machine (SVM) and k-nearest neighbors (k-NN) on the extracted features for the different emotional states of the two data sets. The BoDF model achieves 93.8% accuracy on the SEED data set and 77.4% accuracy on the DEAP data set, which is more accurate than other state-of-the-art methods of human emotion recognition.
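The two-stage BoDF encoding described above can be sketched as follows. This is a minimal illustration with synthetic data standing in for the per-channel AlexNet features; the cluster count (10 per class, as stated in the abstract) is taken from the paper, while the feature dimensions, trial counts, and encoding details here are assumptions for demonstration only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-channel deep features: one (n_frames, d)
# matrix per trial, as might come from an AlexNet layer applied to the
# 2D spectrograms. Dimensions here are illustrative, not the paper's.
n_classes, trials_per_class, n_frames, d = 3, 8, 20, 16
features, labels = [], []
for c in range(n_classes):
    for _ in range(trials_per_class):
        features.append(rng.normal(loc=c, scale=1.0, size=(n_frames, d)))
        labels.append(c)
labels = np.array(labels)

# Stage 1: build a vocabulary of 10 cluster centers per class with
# k-means, then stack them into one codebook of n_classes * 10 words.
centres = []
for c in range(n_classes):
    class_feats = np.vstack([f for f, y in zip(features, labels) if y == c])
    km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(class_feats)
    centres.append(km.cluster_centers_)
codebook = np.vstack(centres)  # shape: (n_classes * 10, d)

# Stage 2: encode each trial as a normalized histogram of its feature
# frames' nearest vocabulary words -- a much lower-dimensional vector.
def encode(feat, codebook):
    dists = np.linalg.norm(feat[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

X = np.vstack([encode(f, codebook) for f in features])

# Classify the compact histograms with an SVM (k-NN would be analogous).
clf = SVC(kernel="linear").fit(X, labels)
print(clf.score(X, labels))
```

The key point the sketch shows is the dimensionality reduction: each trial shrinks from an `n_frames × d` feature matrix to a single histogram of length `n_classes × 10`, regardless of the original feature size.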

Highlights

  • Brain–computer interface has been used for decades in the biomedical engineering field to control devices using brain signals [1]

  • We present an accurate multi-modal EEG-based human emotion recognition method using a bag of deep features (BoDF), which reduces the size of the features drawn from all the channels used to record brain signals

  • The first 32 channels are used for the DEAP dataset, while all 62 channels are used for the SEED dataset to acquire EEG signals


Introduction

Brain–computer interfaces have been used for decades in the biomedical engineering field to control devices using brain signals [1]. A second disadvantage of conventional feature selection is that choosing individually relevant features can introduce redundancy. To overcome this issue, an evolutionary feature selection method was proposed by [23] and evaluated on the DEAP and MAHNOB datasets: features selected by differential evolution (DE) were classified with a probabilistic neural network (PNN), achieving classification accuracies of 77.8% and 79.3% on the MAHNOB and DEAP datasets, respectively. The authors of [31] suggested an emotion recognition method based on sample entropy; their experimental results indicate that the channels most related to emotional state lie primarily in the frontal lobe areas, namely F3, CP5, FP2, FZ, and FC2. We present an accurate multi-modal EEG-based human emotion recognition method using a bag of deep features (BoDF), which reduces the size of the features drawn from all the channels used to record brain signals.
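The time-frequency representation that feeds the pipeline can be sketched with a short-time Fourier transform. This is a minimal illustration on a synthetic single-channel trace; the 200 Hz rate matches the SEED dataset's down-sampled signals, while the window length and overlap here are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic single-channel EEG trace: 4 s at 200 Hz (SEED signals are
# down-sampled to 200 Hz; the 10 Hz alpha-band tone is illustrative).
fs = 200
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) \
    + 0.5 * np.random.default_rng(1).normal(size=t.size)

# STFT-based spectrogram: a 2D time-frequency image per channel that
# preserves both the spectral and temporal structure of the raw signal.
# Window settings (nperseg, noverlap) are assumptions for this sketch.
f, tt, Sxx = spectrogram(eeg, fs=fs, nperseg=128, noverlap=64)
print(Sxx.shape)  # (frequency bins, time frames)
```

Each such 2D image is then treated like a picture and passed to the pre-trained AlexNet model for per-channel feature extraction.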

SEED Dataset
DEAP Dataset
Electrode to Channel Mapping
Methodology
Time Frequency Representation
Feature Extraction
Stage 1: k-Means Clustering
Stage 2
Classification
Results and Discussion
Conclusions
Methods

