Abstract

Emotion recognition using electroencephalogram (EEG) signals has been widely studied in the last decade, yielding artificial intelligence models that accurately classify primitive or primary emotions. However, most of these models focus on signal-processing methods to better recognize multiclass targets, ignoring efficient denoising methods to reduce artifacts in the input samples. Therefore, this study proposes two dimension-reduction algorithms derived from machine learning models, based on EEG channel selection and conflict learning. Both approaches use the EEG signals from the SEED-V dataset as input data. Next, a wavelet noise-estimate frequency decomposition and a 1-D Local Binary Pattern (LBP) are applied to obtain a histogram per signal. After feature extraction, the targets per sample are adapted to identify the most relevant EEG channels, yielding a highly competitive machine-learning model that uses only the FCZ and CP4 electrodes. Additionally, findings based on conflict learning reveal that samples labeled “Happy” and “Disgust” contained more artifacts than the other studied emotions (“Fear”, “Sad”, and “Neutral”), yet this method outperformed the channel selection approach. The proposed framework reached accuracy rates near 90% per dimension-reduction method and F1-scores between 87% and 92.8%. Hence, the classification results are highly competitive with closely related state-of-the-art methods.
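The 1-D LBP histogram step mentioned above can be sketched as follows. This is a minimal illustration only: the neighbourhood radius, neighbour ordering, and normalization are assumptions for demonstration, not the paper's exact configuration.

```python
import numpy as np

def lbp_1d(signal, radius=4):
    """Compute a 1-D Local Binary Pattern code per sample.

    Each sample is compared against `radius` neighbours on each side;
    a neighbour >= the centre contributes a 1-bit, producing a
    (2 * radius)-bit code per interior sample.
    """
    n = len(signal)
    codes = np.zeros(n - 2 * radius, dtype=np.int64)
    for i in range(radius, n - radius):
        code = 0
        for offset in range(-radius, radius + 1):
            if offset == 0:
                continue  # skip the centre sample itself
            bit = 1 if signal[i + offset] >= signal[i] else 0
            code = (code << 1) | bit
        codes[i - radius] = code
    return codes

def lbp_histogram(signal, radius=4):
    """Normalized histogram of 1-D LBP codes (one histogram per signal)."""
    n_bins = 2 ** (2 * radius)
    codes = lbp_1d(signal, radius)
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()
```

Applied per EEG channel after the wavelet decomposition, each signal is thus reduced to a fixed-length histogram that serves as the feature vector for the downstream classifier.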
