Abstract

In recent years, many researchers have explored different methods for obtaining discriminative features for electroencephalogram-based (EEG-based) emotion recognition, but few studies have investigated deaf subjects. In this study, we established a deaf EEG emotion dataset containing three kinds of emotion (positive, neutral, and negative) from 15 subjects. Ten kinds of time–frequency domain features and eleven kinds of nonlinear dynamic system features were extracted from the EEG signals. To obtain the optimal feature combination and the optimal classifier, an integrated genetic firefly algorithm (IGFA) was proposed. A multi-objective function with a variable weight was used to balance classification accuracy and the feature reduction ratio, two contradictory goals, when identifying brighter fireflies in each generation. To retain the historical optimal solution and reduce the feature dimension, an optimal population protection scheme and a subgroup generation scheme were adopted. The experimental results show that the average feature reduction rate of the proposed method is 0.959 and the average classification accuracy is 0.961. An investigation of important brain regions shows that deaf subjects share common areas in the frontal and temporal lobes for EEG emotion recognition, while individual differences appear in the occipital and parietal lobes.
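The abstract does not give the exact form of the multi-objective function, but a common way to trade off classification accuracy against feature reduction in wrapper-based feature selection is a convex weighted sum. The sketch below is a hypothetical illustration of such a fitness function, assuming a weight `w` (the "variable weight" of the abstract) and a reduction ratio defined as the fraction of features discarded; the names and the exact formula are assumptions, not the paper's stated method.

```python
def igfa_fitness(accuracy, n_selected, n_total, w):
    """Hypothetical weighted fitness for a firefly (feature subset).

    accuracy   : classification accuracy of the subset, in [0, 1]
    n_selected : number of features kept by this firefly
    n_total    : total number of candidate features
    w          : variable weight in [0, 1] favoring accuracy vs. reduction
    """
    # Feature reduction ratio: fraction of features discarded.
    reduction = 1.0 - n_selected / n_total
    # Convex combination of the two contradictory objectives;
    # a brighter firefly has a higher fitness value.
    return w * accuracy + (1.0 - w) * reduction


# Example: a subset keeping 21 of 512 features with 0.961 accuracy,
# weighted 0.8 toward accuracy.
score = igfa_fitness(0.961, 21, 512, 0.8)
```

Varying `w` across generations (or across runs) shifts the search between accuracy-dominated and sparsity-dominated solutions, which is one standard way to handle conflicting objectives in a single scalar fitness.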
