Abstract
The recognition of fine-grained emotions (e.g., happiness, sadness) is important for real-world applications. Emotion recognition from physiological signals is challenging because of the limited precision of labelled data, while facial expressions are less suitable for real environments. This work proposes a framework that fuses the physiological-signal and facial-expression modalities to improve classification performance. Feature-level fusion (FLF) and decision-level fusion (DLF) techniques are explored to recognise seven fine-grained emotions. The performance of the proposed framework is evaluated on data from 34 subjects. Our results show that fusing the two modalities improves overall accuracy over the unimodal systems by 11.66% and 13.63% for facial expressions and physiological signals, respectively. Our framework achieves 73.23% accuracy on seven emotions, which is considerable for a spontaneous emotion corpus.
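To make the two fusion schemes named above concrete, the following is a minimal Python sketch, not the authors' implementation: the feature dimensions, the synthetic data, and the choice of a logistic-regression classifier are all illustrative assumptions. FLF concatenates the per-modality features and trains a single classifier; DLF trains one classifier per modality and combines their class-probability outputs.

    # Minimal sketch of FLF vs. DLF (illustrative assumptions, not the authors' code).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, n_classes = 200, 7                   # seven fine-grained emotion classes
    X_face = rng.normal(size=(n, 64))       # hypothetical facial-expression features
    X_physio = rng.normal(size=(n, 32))     # hypothetical physiological features
    y = rng.integers(0, n_classes, size=n)  # synthetic labels for illustration only

    # Feature-level fusion (FLF): concatenate modality features, train one classifier.
    flf_clf = LogisticRegression(max_iter=1000).fit(np.hstack([X_face, X_physio]), y)
    flf_pred = flf_clf.predict(np.hstack([X_face, X_physio]))

    # Decision-level fusion (DLF): one classifier per modality, then combine their
    # class-probability outputs (here a simple average; weighted rules also work).
    face_clf = LogisticRegression(max_iter=1000).fit(X_face, y)
    physio_clf = LogisticRegression(max_iter=1000).fit(X_physio, y)
    fused_proba = (face_clf.predict_proba(X_face) + physio_clf.predict_proba(X_physio)) / 2
    dlf_pred = fused_proba.argmax(axis=1)

The averaging rule in the DLF branch is one simple choice; the abstract does not specify which combination rule the framework uses.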