Abstract

Emotion recognition is an essential task in many fields, including affective computing, psychology, and human-computer interaction. The ability to accurately recognize emotions from facial expressions can help in the development of more personalized and responsive systems. The objective of this paper is to investigate the effectiveness of a Convolutional Neural Network (CNN) model in classifying emotions from facial expressions. We used images from the FER2013 dataset, classified into seven categories. The model, implemented in TensorFlow Keras, consists of four convolutional layers followed by two fully connected (FC) layers and an output layer. Between layers we apply batch normalization, max pooling, and dropout to reduce computational cost and prevent overfitting. Our model achieved an accuracy of 66.85% and a precision of 77.18% on the training set after 20 epochs. The recall and F1 score are 56.93% and 0.6545, respectively. This study demonstrates that the proposed CNN model can effectively recognize emotions in facial images. The availability of the FER2013 dataset provides researchers with an opportunity to further explore this area of research. The proposed model can be useful in various applications, such as affective computing, human-computer interaction, and psychology. Future work could involve tuning the hyperparameters of the model or using different datasets to explore the generalizability of the proposed model. Overall, our study provides insights into the effectiveness of CNNs for emotion recognition and highlights the potential for further research in this area.
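The architecture described above (four convolutional blocks, two fully connected layers, and a 7-way output, with batch normalization, max pooling, and dropout between layers) can be sketched in TensorFlow Keras as follows. The filter counts, dense-layer sizes, and dropout rates here are illustrative assumptions, not values taken from the paper; the 48×48 grayscale input shape matches the standard FER2013 format.

```python
# Hedged sketch of the described CNN, assuming illustrative filter counts,
# dense sizes, and dropout rates (not the paper's exact hyperparameters).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(48, 48, 1), num_classes=7):
    model = models.Sequential([layers.Input(shape=input_shape)])
    # Four convolutional blocks: Conv2D + BatchNorm + MaxPool + Dropout.
    for filters in (64, 128, 256, 512):      # filter counts are assumed
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.MaxPooling2D(2))
        model.add(layers.Dropout(0.25))
    model.add(layers.Flatten())
    # Two fully connected layers, again with regularization between them.
    for units in (256, 128):                 # layer sizes are assumed
        model.add(layers.Dense(units, activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.Dropout(0.25))
    # Output layer: one softmax unit per emotion category.
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

A model built this way would be trained with `model.fit(...)` on the FER2013 images, one-hot encoded over the seven emotion labels.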
