Abstract

Emotions reflect an individual's mental state. Automatic analysis and recognition of facial expressions based on facial landmarks is of critical use, particularly for understanding the emotional states of physically disabled individuals such as autistic children and deaf, mute, or bedridden individuals. This approach helps in interpreting human emotions, which is crucial for designing computational models and interfaces that cater to their needs. While traditional research has focused on six basic emotion categories (happiness, surprise, sadness, anger, fear, and disgust), this work expands the scope by exploring compound emotion categories, which combine the basic categories to create nuanced emotional states. The research utilizes the CFEE_Database_230 dataset of facial expressions for analysis and training. The proposed methodology has three steps: (1) analyze the dataset and extract the region of interest (ROI); (2) extract various statistical and potentially discriminating features; and (3) apply a multi-label classification approach to categorize sets of emotions. This involves comparing feature values across different emotion classes and assigning appropriate labels, with a particular focus on compound emotion categories. The research also employs different classifiers and metrics to evaluate the effectiveness of the model: after applying the classification methods, the results are analyzed using various metrics to assess the accuracy and effectiveness of emotion recognition based on facial landmarks. Among the evaluated methods, the Binary Relevance method yielded the best performance, with a Mean Average Precision of 0.7227±0.0344 and a Hamming Loss of 0.1906±0.0227. Overall, the work contributes to advancing automatic emotion recognition by considering a broader range of emotional categories beyond the traditional basics. This is particularly beneficial for populations such as physically disabled individuals and autistic children, for whom traditional communication methods may be limited or challenging.
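The following is a minimal sketch of the multi-label classification and evaluation step described above, assuming scikit-learn. The random feature matrix is a placeholder for the statistical landmark features extracted from the CFEE_Database_230 images, and OneVsRestClassifier with a logistic-regression base learner stands in for the Binary Relevance method (one independent binary classifier per basic-emotion label); the paper's exact feature set and base classifiers are not specified here.

```python
# Hypothetical sketch of Binary Relevance multi-label emotion classification.
# X stands in for statistical landmark features; Y is a binary indicator
# matrix over the six basic emotions (a compound emotion = multiple 1s).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))          # placeholder: 20 landmark features
Y = rng.integers(0, 2, size=(200, 6))   # placeholder: 6 basic-emotion labels

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

# Binary Relevance: fit one independent binary classifier per label.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_train, Y_train)

Y_pred = clf.predict(X_test)            # hard label assignments
Y_score = clf.predict_proba(X_test)     # per-label scores for ranking

# The two metrics reported in the abstract: Hamming Loss counts the
# fraction of wrongly assigned labels; Mean Average Precision averages
# the per-label average precision over all emotion labels.
print("Hamming loss:", hamming_loss(Y_test, Y_pred))
print("Mean average precision:",
      average_precision_score(Y_test, Y_score, average="macro"))
```

Binary Relevance decomposes the compound-emotion problem into independent per-label decisions, which is what lets a single face receive, for example, both the "happiness" and "surprise" labels at once.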
