Abstract

A facial expression recognition system is a technology that allows machines to recognize human emotions from their facial expressions. To develop a robust prediction model, this research proposes and compares three distinct architectures for facial expression prediction. The first model uses a support vector machine to carry out the classification task. The second model is a Convolutional Neural Network (CNN) based on VGG-Net (Visual Geometry Group Network). After analysing those results, a third model was built to improve the outcome, using sequential convolutional layers with an output corresponding to the seven distinct expressions, and inferences were drawn from the behaviour of the loss and accuracy metrics. The dataset used in this research contains more than 35,500 facial photographs covering seven types of facial expression. The data is analysed and as much noise as possible is removed before it is fed to the models. A confusion matrix is used to assess each model's performance after it has been trained. To demonstrate the effectiveness of the architectures, bar graphs and scatter plots of loss and accuracy are generated for each model. The output is visualized with the actual and predicted class, and each output facial image is shown with a graphical representation of its result, which makes the recognition system user-friendly.
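The abstract does not give implementation details for the third architecture. As one possible reading, the sketch below builds a sequential CNN whose final layer maps to the seven expression classes; the 48x48 grayscale input size, layer widths, optimizer, and use of Keras are assumptions chosen for illustration, not the authors' reported configuration.

```python
# A minimal sketch of a sequential CNN with seven output classes.
# Assumed (not from the paper): 48x48 grayscale inputs, layer sizes, Adam optimizer.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_expression_model(num_classes: int = 7) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),                       # grayscale face crop
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                                   # reduce overfitting
        layers.Dense(num_classes, activation="softmax"),       # one unit per expression
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_expression_model()
model.summary()
```

Training such a model on one-hot-encoded expression labels would then allow the confusion matrix and the loss/accuracy curves described above to be computed from its predictions.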