Abstract

Analyzing and understanding emotion can help in various aspects, such as understanding one's attitude and behavior. By recognizing a person's emotions, their mental health state can be assessed, which can assist the medical field in classifying whether a person is mentally stable or not. Facial recognition is one of the many fields of computer vision that relies on convolutional neural networks (ConvNets) for training and learning. ConvNets and other machine learning algorithms have evolved to scale to larger datasets, and one such advancement is the introduction of deeper architectures like the Visual Geometry Group Network (VGGNet). Thus, this study presents a mental health state classification approach based on facial emotion recognition. The methodology comprises several interconnected components: preprocessing, feature extraction using Principal Component Analysis (PCA) and VGGNet, and classification using Support Vector Machines (SVM) and Multilayer Perceptron (MLP). Multiple models are evaluated on the FER2013 dataset, and the best-performing model is employed for mental health state classification. The best model, which combines VGGNet feature extraction with SVM classification, achieved an accuracy of 66%, demonstrating the effectiveness of the proposed methodology. By leveraging facial emotion recognition and machine learning techniques, the study aims to develop an effective method for classifying mental health states.
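
To make the described pipeline concrete, the sketch below shows one possible way to combine a frozen VGGNet feature extractor with an SVM classifier on FER2013-style 48x48 grayscale faces. The library choices (TensorFlow/Keras, scikit-learn), the ImageNet-pretrained VGG16 backbone, the preprocessing, and the SVM hyperparameters are assumptions for illustration, not the authors' exact implementation.

    # Minimal sketch of a VGGNet-feature + SVM classification pipeline.
    # Assumptions: TensorFlow/Keras and scikit-learn are available, and
    # FER2013 images are provided as (N, 48, 48) grayscale arrays.
    import numpy as np
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.applications.vgg16 import preprocess_input
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    # Frozen VGG16 backbone used purely as a feature extractor.
    backbone = VGG16(weights="imagenet", include_top=False,
                     pooling="avg", input_shape=(48, 48, 3))

    def extract_features(gray_images, extractor):
        """Replicate single-channel faces to 3 channels and return
        globally pooled VGG16 features for each image."""
        x = np.repeat(gray_images[..., np.newaxis], 3, axis=-1).astype("float32")
        x = preprocess_input(x)
        return extractor.predict(x, verbose=0)

    # X_train, y_train, X_test, y_test are assumed FER2013 splits with
    # integer emotion labels; loading them is left to the reader.
    # train_feats = extract_features(X_train, backbone)
    # test_feats = extract_features(X_test, backbone)
    # clf = SVC(kernel="rbf", C=1.0)
    # clf.fit(train_feats, y_train)
    # print("accuracy:", accuracy_score(y_test, clf.predict(test_feats)))

Under these assumptions, the ConvNet supplies fixed image descriptors while the SVM handles the seven-way emotion decision, which mirrors the feature-extraction/classification split described in the abstract.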
