Abstract

The recognition of fine-grained emotions (e.g., happiness, sadness) has proven important in real-world applications. Emotion recognition from physiological signals is a challenging task because of the limited precision of the labelled data, while facial expressions are less suitable for real environments. This work proposes a framework that fuses the physiological-signal and facial-expression modalities to improve classification performance. Feature-level fusion (FLF) and decision-level fusion (DLF) techniques are explored to recognise seven fine-grained emotions. The performance of the proposed framework is evaluated on data from 34 subjects. Our results show that fusing the two modalities improves overall accuracy over the unimodal systems by 11.66% and 13.63% for facial expressions and physiological signals, respectively. Our work achieves 73.23% accuracy on seven emotions, which is considerable for a spontaneous emotion corpus.
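The two fusion strategies named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimensions, fusion weight, and function names are assumptions chosen for clarity.

```python
import numpy as np

def feature_level_fusion(face_feat, physio_feat):
    # FLF: concatenate the per-modality feature vectors into a single
    # vector, which a single classifier then consumes.
    return np.concatenate([face_feat, physio_feat])

def decision_level_fusion(face_probs, physio_probs, w_face=0.5):
    # DLF: each modality is classified separately; the per-class
    # probability vectors are combined afterwards (here, a weighted
    # average; the weight w_face is a hypothetical choice).
    fused = w_face * face_probs + (1.0 - w_face) * physio_probs
    return int(np.argmax(fused))  # index of the predicted emotion class

# Example with 7 emotion classes, as in the paper.
face_probs = np.array([0.5, 0.1, 0.1, 0.1, 0.1, 0.05, 0.05])
physio_probs = np.array([0.1, 0.5, 0.1, 0.1, 0.1, 0.05, 0.05])
prediction = decision_level_fusion(face_probs, physio_probs, w_face=0.7)
```

FLF lets the classifier learn cross-modal interactions, while DLF keeps the modalities independent until the final decision, which makes it more robust when one modality is missing or noisy.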
