Abstract

The recognition of affective states is important for regulating stress levels and maintaining mental health, and it is known that affective states can be inferred from physiological signals. In practice, however, affect recognition poses many challenges due to various types of external noise and individual differences. This paper proposes a deep learning model called Attention-LRCN for recognizing affective states from photoplethysmography (PPG) signals. We construct a long-term recurrent convolutional network to extract temporal features from spectrograms and introduce a novel attention module to alleviate the effect of noise components in PPG signals. Moreover, to improve recognition accuracy, we propose a weighted knowledge distillation technique, which is a teacher–student learning framework. We quantify the uncertainty of the teacher's predictions and use this predictive uncertainty to adaptively compute the weight of the distillation loss. To demonstrate the effectiveness of the proposed method, experiments were conducted on the WESAD dataset, a public dataset for stress and affect detection. We also collected our own dataset from 34 subjects to verify the accuracy of the proposed method. Experimental results demonstrate that the proposed method significantly outperforms previous algorithms on both the public and real-world datasets. The code is available at https://github.com/ziiho08/Attention-LRCN.
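To illustrate the uncertainty-weighted distillation idea described above, the following is a minimal PyTorch sketch. It assumes the teacher's predictive uncertainty is estimated as the normalized entropy of its softened output and that the distillation term is a temperature-scaled KL divergence; the paper's exact formulation, loss weights, and uncertainty estimator may differ, so all names and parameters here are illustrative only.

```python
import torch
import torch.nn.functional as F


def uncertainty_weighted_kd_loss(student_logits, teacher_logits, labels,
                                 temperature=4.0, alpha=0.5):
    """Hypothetical uncertainty-weighted distillation loss (illustrative sketch).

    Confident teacher predictions (low entropy) receive a larger distillation
    weight; uncertain predictions are down-weighted.
    """
    # Softened teacher and student distributions for knowledge distillation.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)

    # Per-sample KL divergence between teacher and student (distillation term).
    kd_per_sample = F.kl_div(student_log_probs, teacher_probs,
                             reduction="none").sum(dim=-1) * temperature ** 2

    # Quantify teacher uncertainty as normalized predictive entropy in [0, 1].
    num_classes = teacher_probs.size(-1)
    entropy = -(teacher_probs * torch.log(teacher_probs + 1e-8)).sum(dim=-1)
    uncertainty = entropy / torch.log(torch.tensor(float(num_classes)))

    # Adaptive per-sample weight: down-weight distillation where the teacher is uncertain.
    weight = 1.0 - uncertainty
    kd_loss = (weight * kd_per_sample).mean()

    # Standard cross-entropy against the ground-truth affect labels.
    ce_loss = F.cross_entropy(student_logits, labels)

    return alpha * kd_loss + (1.0 - alpha) * ce_loss


# Example usage with dummy tensors:
# student_logits, teacher_logits have shape (batch, num_classes); labels has shape (batch,).
student_logits = torch.randn(8, 3)
teacher_logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
loss = uncertainty_weighted_kd_loss(student_logits, teacher_logits, labels)
```

The key design choice sketched here is that the distillation weight is computed per sample rather than globally, so noisy PPG segments for which the teacher itself is unsure contribute less to the student's training signal.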
