Abstract

Human emotion recognition is pivotal in several domains, including human-computer interaction, medical settings, and the entertainment industry. Recently, combining facial-expression analysis with vocal emotion analysis has attracted considerable interest because it improves the precision and reliability of emotion detection systems. This work presents an in-depth study of recognizing human emotional states from facial expressions together with auditory signals, using the VGG Face model and Mel-Frequency Cepstral Coefficients (MFCC) for feature extraction, driven by machine-learning algorithms.
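The abstract does not name concrete implementations, so the following is only a minimal Python sketch of the two feature-extraction steps it describes, assuming the librosa library for MFCCs and the deepface library for VGG-Face embeddings; the file paths and the mean-pooling step are illustrative assumptions, not details from the paper.

```python
# Sketch of audio (MFCC) and facial (VGG-Face) feature extraction,
# assuming librosa and deepface; paths and pooling are illustrative.
import numpy as np
import librosa
from deepface import DeepFace

def extract_mfcc(audio_path: str, n_mfcc: int = 13) -> np.ndarray:
    """Return a mean-pooled MFCC vector for one utterance."""
    y, sr = librosa.load(audio_path, sr=None)  # keep the native sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # average coefficients over time frames

def extract_face_embedding(image_path: str) -> np.ndarray:
    """Return a VGG-Face embedding for the face detected in the image."""
    reps = DeepFace.represent(img_path=image_path, model_name="VGG-Face")
    return np.asarray(reps[0]["embedding"])

# Concatenate the two modalities into a single vector for a downstream
# machine-learning classifier (e.g., an SVM or a small neural network).
features = np.concatenate([
    extract_mfcc("utterance.wav"),        # hypothetical audio clip
    extract_face_embedding("frame.jpg"),  # hypothetical video frame
])
```

Mean-pooling the MFCCs over time yields a fixed-length vector, which allows the audio and facial features to be concatenated into one input (early fusion); the paper may well use a different pooling or fusion strategy.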
