Abstract
This paper presents a contactless system based on twin channels of thermal and visual image sequences to register the affective states of an individual during Human–Computer Interaction (HCI). Negative affective states such as stress, anxiety, and depression in students have raised significant concerns. The first phase obtains the dominant emotional state from an ensemble of cues in visual and thermal facial images using a newly proposed cascaded Convolutional Neural Network (CNN) model named EmoScale. The second phase classifies a sequence of the obtained emotional states, using a trained Hidden Markov Model (HMM), into one of three affective states: anxiety, depression, and stress. We perform fivefold cross-validation of EmoScale on our self-prepared dataset. The performance of the second phase is compared against the standard Depression Anxiety Stress Scale (DASS), and the results are found to be promising.

Keywords: Emotion Recognition, Affective states, SCNN, Facial Expression, HCI
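The two-phase pipeline described above can be sketched at a high level. This is a minimal illustration, not the authors' implementation: `emoscale_stub` stands in for the EmoScale CNN (which would fuse visual and thermal channels), and the HMM decoder of the second phase is replaced by a trivial lookup on the dominant emotion; all names and the emotion-to-state mapping are hypothetical.

```python
from collections import Counter

def emoscale_stub(visual_frame, thermal_frame):
    """Phase 1 placeholder: a real model would run the cascaded CNN
    (EmoScale) on the paired visual/thermal frames and return the
    dominant emotion label for that time step."""
    return "sad"

# Hypothetical mapping used in place of the trained HMM of phase 2.
EMOTION_TO_STATE = {"fear": "anxiety", "sad": "depression", "angry": "stress"}

def classify_sequence(emotions):
    """Phase 2 placeholder: instead of HMM decoding, map the most
    frequent emotion in the sequence to one of the three affective
    states (anxiety, depression, stress)."""
    dominant, _ = Counter(emotions).most_common(1)[0]
    return EMOTION_TO_STATE.get(dominant, "stress")

# Example: a sequence dominated by "sad" maps to "depression".
print(classify_sequence(["sad", "sad", "fear"]))
```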