Abstract

Facial expressions are often considered the primary indicators of emotion; however, detecting genuine emotions is challenging because expressions can be deliberately controlled. Many studies on emotion recognition have been conducted in recent years. In this study, we designed a convolutional neural network (CNN) model and proposed an algorithm that combines the analysis of bio-signals with facial expression templates to predict emotional states effectively. For network design and validation we used the EfficientNet-B0 architecture, known for achieving high performance with a small number of parameters. Emotion recognition using facial expression images alone achieved 74% accuracy, while recognition combining bio-signals reached 88.2%. These results demonstrate that integrating the two types of data significantly improves accuracy. By combining facial expression images with bio-signals, our model offers a more comprehensive and accurate understanding of emotional states.
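The abstract does not specify how the image and bio-signal streams are fused, so the following is only a minimal sketch of one plausible arrangement: an EfficientNet-B0 backbone encodes the facial image, a small MLP encodes a bio-signal feature vector, and the concatenated features feed an emotion classifier. The number of bio-signal features, emotion classes, and fusion layers below are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical two-branch fusion model (assumed late fusion by concatenation).
import torch
import torch.nn as nn
from torchvision import models


class MultimodalEmotionNet(nn.Module):
    def __init__(self, num_biosignal_features: int = 8, num_emotions: int = 7):
        super().__init__()
        # Pretrained EfficientNet-B0 backbone; its classifier head is not used.
        backbone = models.efficientnet_b0(
            weights=models.EfficientNet_B0_Weights.DEFAULT
        )
        self.image_encoder = backbone.features            # conv feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)               # -> (B, 1280, 1, 1)
        # Small encoder for the bio-signal vector (features are placeholders).
        self.bio_encoder = nn.Sequential(
            nn.Linear(num_biosignal_features, 64),
            nn.ReLU(),
        )
        # Classifier over the concatenated image + bio-signal features.
        self.classifier = nn.Sequential(
            nn.Linear(1280 + 64, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_emotions),
        )

    def forward(self, image: torch.Tensor, bio: torch.Tensor) -> torch.Tensor:
        img_feat = self.pool(self.image_encoder(image)).flatten(1)  # (B, 1280)
        bio_feat = self.bio_encoder(bio)                            # (B, 64)
        return self.classifier(torch.cat([img_feat, bio_feat], dim=1))


# Example forward pass with dummy inputs.
model = MultimodalEmotionNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 8))
print(logits.shape)  # torch.Size([2, 7])
```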
