Abstract

Physiological signals have been widely used for emotion recognition, but current work seldom applies feature fusion and attention techniques to ECG emotion recognition. In this paper, we propose a novel ECG emotion recognition method that adopts a spatial and temporal ECG emotion recognition model based on dynamic feature fusion (DFF-STM) to learn spatial-temporal representations of different ECG areas. Considering that different ECG areas play different roles in ECG emotion recognition, a dynamic weight distribution layer is introduced into DFF-STM to extract ECG temporal features and, at the same time, learn weights that adjust (e.g., enhance or weaken) the contribution of each ECG area. Finally, we conduct experiments on real ECG data from the AMIGOS dataset to evaluate the performance of DFF-STM on valence and arousal labels. The experiments show that dynamic feature fusion for ECG emotion recognition substantially outperforms methods that use only handcrafted features or deep features.
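
To illustrate the idea of a dynamic weight distribution layer described above, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes each ECG area has already been encoded into a temporal feature vector, and the class name, scoring network, and tensor shapes are hypothetical choices for illustration only.

```python
import torch
import torch.nn as nn

class DynamicWeightDistribution(nn.Module):
    """Illustrative sketch: learn one weight per ECG area and use it to
    enhance or weaken that area's contribution before fusing features."""

    def __init__(self, feature_dim: int):
        super().__init__()
        # Hypothetical scoring network producing a scalar score per ECG area.
        self.score = nn.Linear(feature_dim, 1)

    def forward(self, area_features: torch.Tensor) -> torch.Tensor:
        # area_features: (batch, num_areas, feature_dim), one temporal
        # feature vector per ECG area.
        scores = self.score(area_features)       # (batch, num_areas, 1)
        weights = torch.softmax(scores, dim=1)   # normalize across areas
        weighted = weights * area_features       # rescale each area's features
        return weighted.sum(dim=1)               # fused (batch, feature_dim)


# Toy usage: 8 samples, 4 ECG areas, 64-dim temporal features per area.
fusion = DynamicWeightDistribution(feature_dim=64)
fused = fusion(torch.randn(8, 4, 64))            # shape: (8, 64)
```

The softmax weighting here is just one plausible way to realize "dynamic" per-area contributions; the paper's layer may score and combine areas differently.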
