Abstract

Recently, various multimodal approaches have been developed to enhance the performance of affective models. In this paper, we investigate the complementary representation properties of EEG and eye movement signals for the classification of five human emotions: happy, sad, fear, disgust, and neutral. We compare the performance of each single modality with two different modality fusion approaches. The results indicate that EEG is superior to eye movements in classifying the happy, sad, and disgust emotions, whereas eye movements outperform EEG in recognizing the fear and neutral emotions. Overall, EEG has an advantage over eye movements in classifying the five emotions, with mean accuracies of 69.50% and 59.81%, respectively. Owing to these complementary representation properties, modality fusion with a bimodal deep auto-encoder significantly improves the classification accuracy to 79.71%. Furthermore, we study the neural patterns of the five emotion states and the recognition performance of different eye movement features. The results reveal that the five emotions have distinguishable neural patterns and that pupil diameter has a higher discrimination ability than the other eye movement features.
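As a rough illustration of the bimodal deep auto-encoder fusion mentioned above, the sketch below encodes EEG and eye movement features separately, fuses them in a shared hidden layer, and reconstructs both modalities; the fused representation can then be passed to an emotion classifier. This is a minimal sketch assuming PyTorch, and the feature dimensions, layer sizes, and activations are illustrative placeholders rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn

class BimodalDeepAutoEncoder(nn.Module):
    """Sketch of bimodal fusion: modality-specific encoders, a shared
    (fusion) layer, and modality-specific decoders for reconstruction.
    All sizes are illustrative assumptions, not the paper's settings."""

    def __init__(self, eeg_dim=310, eye_dim=33, shared_dim=50):
        super().__init__()
        # Modality-specific encoders
        self.eeg_encoder = nn.Sequential(nn.Linear(eeg_dim, 100), nn.Sigmoid())
        self.eye_encoder = nn.Sequential(nn.Linear(eye_dim, 20), nn.Sigmoid())
        # Shared layer fuses the concatenated hidden codes (100 + 20 = 120)
        self.shared = nn.Sequential(nn.Linear(120, shared_dim), nn.Sigmoid())
        # Decoders reconstruct each modality from the shared representation
        self.eeg_decoder = nn.Sequential(nn.Linear(shared_dim, 100), nn.Sigmoid(),
                                         nn.Linear(100, eeg_dim))
        self.eye_decoder = nn.Sequential(nn.Linear(shared_dim, 20), nn.Sigmoid(),
                                         nn.Linear(20, eye_dim))

    def forward(self, eeg, eye):
        h = torch.cat([self.eeg_encoder(eeg), self.eye_encoder(eye)], dim=1)
        z = self.shared(h)  # fused representation used for emotion classification
        return z, self.eeg_decoder(z), self.eye_decoder(z)


# Usage: pretrain with a reconstruction loss, then feed z to a classifier.
model = BimodalDeepAutoEncoder()
eeg_batch, eye_batch = torch.randn(8, 310), torch.randn(8, 33)
z, eeg_rec, eye_rec = model(eeg_batch, eye_batch)
loss = (nn.functional.mse_loss(eeg_rec, eeg_batch)
        + nn.functional.mse_loss(eye_rec, eye_batch))
```

After unsupervised pretraining, the shared code z serves as the fused feature vector, so the downstream classifier sees a single joint representation instead of two separate feature streams.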
