Abstract

Our study proposes a novel Parallel Temporal–Spatial–Frequency Neural Network (PTSFNN) for EEG-based emotion recognition. The network processes EEG signals in the time, frequency, and spatial domains simultaneously to extract discriminative features, and it achieves strong performance despite a relatively simple architecture. Specifically, PTSFNN first applies a wavelet transform to the raw EEG signals and reconstructs the coefficients according to their frequency hierarchy, thereby decomposing each signal into frequency bands. The core of the network then performs three independent parallel convolution operations on the decomposed signals, one of which is a novel graph convolutional network. Finally, an attention-based post-processing step is designed to enhance the feature representation. The features obtained from the three branches are concatenated for classification, and the network is trained with the cross-entropy loss. To evaluate the model's performance, extensive experiments are conducted on the public SEED and SEED-IV datasets, on which PTSFNN achieves classification accuracies of 87.63% and 74.96%, respectively. Comparative experiments with previous state-of-the-art methods confirm the effectiveness of the proposed model at extracting emotional information from EEG signals.
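To make the described pipeline concrete, the sketch below gives one plausible reading of the architecture in PyTorch, with PyWavelets for the decomposition step. All layer sizes, kernel widths, the learnable-adjacency graph convolution, the sigmoid attention gate, and the names `wavelet_bands` and `PTSFNNSketch` are illustrative assumptions; the abstract does not specify the paper's actual hyperparameters or exact layer design.

```python
import numpy as np
import pywt  # PyWavelets, for the wavelet decomposition step
import torch
import torch.nn as nn
import torch.nn.functional as F

def wavelet_bands(signal, wavelet="db4", level=4):
    """Decompose a 1-D EEG signal and reconstruct one time series per
    frequency band by zeroing all other coefficient levels (one common
    reading of "reconstructing coefficients by frequency hierarchy")."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    bands = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        bands.append(pywt.waverec(kept, wavelet)[: len(signal)])
    return np.stack(bands)  # shape: (level + 1, n_samples)

class GraphConv(nn.Module):
    """Graph convolution over electrodes with a learnable adjacency matrix
    (an assumption; the abstract does not detail the paper's GCN)."""
    def __init__(self, n_channels, in_feats, out_feats):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(n_channels))
        self.lin = nn.Linear(in_feats, out_feats)

    def forward(self, x):              # x: (batch, channels, features)
        x = torch.matmul(self.adj, x)  # mix features across electrodes
        return F.relu(self.lin(x))

class PTSFNNSketch(nn.Module):
    """Three parallel branches over wavelet-decomposed EEG, an attention
    gate on the concatenated features, and a linear classifier."""
    def __init__(self, n_bands=5, n_channels=62, n_samples=200, n_classes=3):
        super().__init__()
        # Temporal branch: 1-D convolution along the time axis.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_bands * n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(), nn.AdaptiveAvgPool1d(1), nn.Flatten())
        # Frequency branch: 2-D convolution with sub-bands as input planes.
        self.frequency = nn.Sequential(
            nn.Conv2d(n_bands, 16, kernel_size=(1, 7), padding=(0, 3)),
            nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Spatial branch: graph convolution over the electrode layout.
        self.spatial = GraphConv(n_channels, n_bands * n_samples, 32)
        self.attn = nn.Linear(80, 80)        # attention gate (32+16+32 dims)
        self.classifier = nn.Linear(80, n_classes)

    def forward(self, x):  # x: (batch, n_bands, n_channels, n_samples)
        b, f, c, t = x.shape
        ft = self.temporal(x.reshape(b, f * c, t))
        ff = self.frequency(x)
        fs = self.spatial(x.permute(0, 2, 1, 3).reshape(b, c, f * t)).mean(1)
        feats = torch.cat([ft, ff, fs], dim=1)
        feats = feats * torch.sigmoid(self.attn(feats))  # re-weight features
        return self.classifier(feats)  # train with nn.CrossEntropyLoss

# Usage: five wavelet bands of a 62-channel, 200-sample segment (SEED
# recordings use 62 electrodes; the window length here is arbitrary).
model = PTSFNNSketch()
logits = model(torch.randn(8, 5, 62, 200))  # -> (8, 3) class logits
```

In this reading, each branch reduces to a fixed-length feature vector, the attention gate re-weights the concatenated vector before the linear classifier, and `nn.CrossEntropyLoss` supplies the training objective named in the abstract.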
