Abstract

In the past, electroencephalography (EEG)-based emotion recognition research has mainly focused on healthy subjects or on those with disorders such as depression. Because the hearing-impaired lack a major channel for acquiring and expressing emotion, their affective cognition may be biased. In this work, 15 hearing-impaired subjects were recruited for a movie-induced emotion experiment, and their EEG signals were collected for emotion recognition. Four types of movie clips, evoking happiness, calmness, sadness, and fear, were selected. We fused three kinds of features, differential entropy (DE), power spectral density (PSD), and functional brain network attributes, to integrate the frequency-domain and spatial-domain information of the EEG signals. To identify the brain-region connections associated with different emotions, different coupling methods and binarization methods were explored. Moreover, combinations of brain network attributes were examined to determine how representative emotional information is in the spatial domain. The experimental results show that combining the phase-locking value (PLV) with 20% sparsity and four brain network attributes, clustering coefficient (CC), node degree (ND), node betweenness centrality (NBC), and node strength (NS), achieved an accuracy of 94.17% for functional brain network-based emotion recognition. In addition, feature fusion obtained a classification accuracy of 95.91% with a linear SVM classifier. Finally, the constructed functional brain networks reveal the key connectivity paths of different emotions, which may provide insight into the emotional processing of the hearing-impaired.
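The network-construction step described above can be sketched as follows. This is not the authors' code: the channel count, signal length, and random test data are illustrative assumptions, and only three of the four attributes (node degree, node strength, clustering coefficient) are computed here; NBC would typically come from a graph library such as NetworkX.

```python
# Hedged sketch: PLV coupling matrix -> 20%-sparsity binarization ->
# per-node network attributes. All sizes and signals are toy assumptions.
import numpy as np
from scipy.signal import hilbert

def plv_matrix(eeg):
    """eeg: (channels, samples) array -> symmetric (channels, channels) PLV matrix."""
    phase = np.angle(hilbert(eeg, axis=1))  # instantaneous phase per channel
    n = eeg.shape[0]
    plv = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            d = phase[i] - phase[j]
            plv[i, j] = plv[j, i] = np.abs(np.mean(np.exp(1j * d)))
    return plv

def binarize_sparsity(w, sparsity=0.2):
    """Keep the strongest `sparsity` fraction of off-diagonal edges."""
    n = w.shape[0]
    iu = np.triu_indices(n, k=1)
    k = max(1, int(round(sparsity * len(iu[0]))))
    thresh = np.sort(w[iu])[-k]             # k-th largest off-diagonal weight
    a = (w >= thresh).astype(int)
    np.fill_diagonal(a, 0)
    return a

def node_attributes(a, w):
    """Node degree, node strength (weighted degree), clustering coefficient."""
    deg = a.sum(axis=1)
    strength = (w * a).sum(axis=1)
    tri = np.diag(a @ a @ a) / 2            # triangles through each node
    denom = deg * (deg - 1) / 2
    cc = np.divide(tri, denom, out=np.zeros_like(tri, dtype=float),
                   where=denom > 0)
    return deg, strength, cc

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1000))        # 8 toy channels, 1000 samples
w = plv_matrix(eeg)
a = binarize_sparsity(w, 0.2)
deg, strength, cc = node_attributes(a, w)
```

In practice these per-node attributes would be concatenated with the DE and PSD features per frequency band before classification.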

