Abstract
Automated emotion recognition (AER) plays a crucial role in industries that depend on understanding human emotional responses, such as advertising, technology, and human-robot interaction, particularly within the Information Technology (IT) field. However, current systems often fall short of comprehensively understanding an individual's emotions, as prior research has mainly focused on assessing facial expressions and categorizing them into seven primary emotions, including neutrality. In this study, we present several deep Convolutional Neural Network (CNN) models designed specifically for facial emotion recognition, using the FER2013 and RAF-DB datasets. A baseline CNN model is established through trial and error, and its results are compared with more complex deep learning architectures: ResNet18, VGGNet16, VGGNet19, and EfficientNet-B0. Among these, VGGNet19 achieved the best result on the FER2013 dataset with a test accuracy of 71.02%, while ResNet18 outperformed all other models on the RAF-DB dataset with a test accuracy of 86.02%. These results underscore the potential of complex deep learning techniques to advance automated emotion recognition.