Abstract

This paper designs a deep-learning-based approach, combined with machine learning classifiers, for two different evaluation perspectives. In the first perspective, performance is evaluated when training and testing are performed on the same subject, referred to as the subject-dependent evaluation criterion. In the second perspective, performance is evaluated when training and testing are performed on different subjects, referred to as the subject-independent evaluation criterion. For each perspective, three label cases are formed from valence, arousal, and dominance for recognizing human emotions: i) binary/2-class, ii) quad/4-class, and iii) octal/8-class classification. The experiments are performed on two publicly available datasets, DEAP and DREAMER. For emotion recognition, the brain signals are first preprocessed and features are then extracted using our proposed deep convolutional neural network (DCNN) architecture. These extracted features are used for emotion recognition with the following classifiers: Naive Bayes (NB), Decision Tree (DT), k-Nearest Neighbors (KNN), Support Vector Machine (SVM), AdaBoost (AB), Random Forest (RF), Neural Network (NN), Long Short-Term Memory (LSTM), and Bidirectional LSTM (BiLSTM). The experimental results show more robust classification for subject-independent emotion recognition than for subject-dependent emotion recognition, with DCNN + NN performing best for binary classification and DCNN + SVM for quad and octal classification. Moreover, the results show that arousal and dominance play an important role in emotion recognition, in contrast to valence and arousal as reported in the literature.
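As a rough illustration of the described pipeline (preprocessed EEG segments -> DCNN feature extraction -> classical classifier), the minimal sketch below is not the authors' exact architecture or hyperparameters. The channel count, segment length, layer sizes, binary valence labels, and the use of synthetic arrays in place of DEAP/DREAMER loading are all assumptions made for illustration; any of the listed classifiers (NB, DT, KNN, RF, etc.) could be substituted for the SVM shown here.

```python
# Hypothetical sketch of the pipeline: a small 1-D DCNN extracts features from
# EEG segments, and a classical classifier (here SVM) performs the final
# emotion classification. All sizes/labels below are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

class DCNNFeatureExtractor(nn.Module):
    """Stacked 1-D convolutions over (channels, time) EEG segments."""
    def __init__(self, in_channels=32, feature_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(128, feature_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # global pooling -> one feature vector per segment
        )

    def forward(self, x):                        # x: (batch, channels, samples)
        return self.conv(x).squeeze(-1)          # (batch, feature_dim)

# Synthetic stand-in for preprocessed EEG: 200 segments, 32 channels, 512 samples,
# with binary (e.g. high/low valence) labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32, 512)).astype(np.float32)
y = rng.integers(0, 2, size=200)

extractor = DCNNFeatureExtractor().eval()        # untrained here; it would be trained in practice
with torch.no_grad():
    feats = extractor(torch.from_numpy(X)).numpy()

# Subject-dependent-style split for illustration; a subject-independent split
# would instead hold out all segments of selected subjects.
X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)          # swap in NB, DT, KNN, RF, NN, ... as in the paper
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```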
