Abstract

Emotion recognition (ER) from real-world facial photos and videos has become a popular research area in affective computing. Owing to the noise introduced by head posture, facial deformation, and lighting fluctuation, ER remains difficult to perform in the wild. This research employs a machine learning model for sustainable artificial intelligence (AI) in entertainment computing to perform emotion decoding for movie picture categorisation. The proposed model takes video data as input and produces an emotion label for every video sample. First, a face detection and selection process is applied to the video data to identify the most consequential face regions. Facial expressions gathered from films serve as the input images, which are then processed for noise reduction and normalisation. Each image is then segmented using a fuzzy K-means equalisation clustering model so that its facial expressions can be analysed. Convolutional adversarial U-net graph neural networks are used to classify the segmented images for emotion decoding. Experimental analysis on several movie-based emotion datasets measures accuracy, precision, recall, F-1 score, RMSE, and AUC. With high classification accuracy, the proposed deep learning paradigm shows promise in identifying emotional shifts in gamers, achieving 97% accuracy, 96% precision, 92% recall, 85% F-1 score, 79% RMSE, and 86% AUC.
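The paper's exact "fuzzy K-means equalisation clustering" formulation is not given in the abstract; as a rough illustration of the segmentation step, the following is a minimal sketch of standard fuzzy c-means clustering on grayscale pixel intensities (an assumption about the underlying method), using only NumPy. All function names, parameters, and the synthetic image are hypothetical.

```python
import numpy as np

def fuzzy_c_means(pixels, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Fuzzy c-means clustering of 1-D pixel intensities.

    pixels: (N,) float array of intensities.
    m: fuzziness exponent (m > 1); m=2 is the common default.
    Returns (cluster centers, membership matrix of shape (N, n_clusters)).
    """
    rng = np.random.default_rng(seed)
    n = pixels.shape[0]
    # Random initial membership matrix; each row sums to 1.
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        # Centers are the membership-weighted means of the pixels.
        centers = (um.T @ pixels) / um.sum(axis=0)
        # Distances from every pixel to every center (small epsilon
        # avoids division by zero when a pixel equals a center).
        dist = np.abs(pixels[:, None] - centers[None, :]) + 1e-9
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        inv = dist ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# Usage: segment a synthetic two-region grayscale "image" by intensity.
img = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)])
img += np.random.default_rng(1).normal(0, 0.02, img.shape)
centers, u = fuzzy_c_means(img, n_clusters=2)
labels = u.argmax(axis=1)  # hard segmentation from soft memberships
```

Unlike hard K-means, each pixel receives a graded membership in every cluster, which is why fuzzy variants are often preferred for segmenting faces where region boundaries are soft.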
