Abstract

Emotion recognition plays an important role in diagnosing and treating many mental disorders, as well as in affective computing. Among the six basic emotions, anger and surprise are relatively hard to elicit in laboratory settings, and the complementary representation properties of electroencephalography (EEG) and eye movement signals for recognizing anger and surprise remain unknown. Although the transformer architecture offers high parallelism by avoiding the sequential operations inherent to recurrent and convolutional layers, little is known about its performance and effectiveness for multimodal emotion recognition from EEG and eye movement signals. To tackle these issues, we carefully design the experiment and stimulus materials to effectively elicit surprise, anger, and neutral emotions, and propose an Emotion Transformer Fusion (ETF) model based on a pure attention mechanism. Results of extensive experiments with multiple models on our dataset indicate that the complementary information of EEG and eye movements significantly improves the performance of discriminating among anger, surprise, and neutral emotions. Moreover, our proposed architecture outperforms baseline models while offering higher parallelism, demonstrating the capability of Transformer-based architectures for multimodal emotion recognition with EEG and eye movement signals.
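The abstract does not detail the ETF architecture, but one common way to realize attention-based fusion of two modalities is to project each into a shared embedding space and run a Transformer encoder over the concatenated token sequence. The sketch below illustrates this pattern; all dimensions and hyperparameters (`eeg_dim`, `eye_dim`, `d_model`, layer counts) are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class MultimodalAttentionFusion(nn.Module):
    """Hypothetical sketch of attention-based EEG + eye movement fusion.

    Not the paper's ETF model: a generic Transformer-encoder fusion
    pattern with assumed feature dimensions.
    """
    def __init__(self, eeg_dim=310, eye_dim=33, d_model=128,
                 n_heads=4, n_layers=2, n_classes=3):
        super().__init__()
        # Project each modality's features into a shared embedding space.
        self.eeg_proj = nn.Linear(eeg_dim, d_model)
        self.eye_proj = nn.Linear(eye_dim, d_model)
        # Learnable classification token that pools the fused sequence.
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)  # anger / surprise / neutral

    def forward(self, eeg, eye):
        # eeg: (batch, t_eeg, eeg_dim); eye: (batch, t_eye, eye_dim)
        tokens = torch.cat([
            self.cls.expand(eeg.size(0), -1, -1),
            self.eeg_proj(eeg),
            self.eye_proj(eye),
        ], dim=1)
        fused = self.encoder(tokens)   # cross-modal self-attention
        return self.head(fused[:, 0])  # classify from the [CLS] token

# Usage with random stand-in features for 8 trials of 10 time windows each:
model = MultimodalAttentionFusion()
logits = model(torch.randn(8, 10, 310), torch.randn(8, 10, 33))
print(logits.shape)  # torch.Size([8, 3])
```

Because both modalities attend to each other within a single encoder, each layer can exploit cross-modal complementarity directly, and, unlike recurrent baselines, all time windows are processed in parallel.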
