Facial Emotion Recognition of Virtual Humans with Different Genders, Races, and Ages
Research studies suggest that racial and gender stereotypes can influence emotion recognition accuracy both for adults and children. Stereotypical biases have severe consequences in social life but are especially critical in domains such as education and healthcare, where the applications of virtual humans have been expanding. In this work, we explore potential perceptual differences in the facial emotion recognition accuracy of virtual humans of different genders, races, and ages. We use realistic 3D models of male/female, Black/White, and child/adult characters. Using blendshapes and the Facial Action Coding System, we created videos of the models displaying facial expressions of six universal emotions with varying intensities. We ran an Amazon Mechanical Turk study to collect perceptual data. The results indicate statistically significant main effects of emotion type and intensity on emotion recognition accuracy. Although overall emotion recognition accuracy was similar across model race, gender, and age groups, there were some statistically significant effects across different groups for individual emotion types.
- Research Article
57
- 10.1007/s00787-020-01709-y
- Jan 7, 2021
- European Child & Adolescent Psychiatry
Children with attention-deficit/hyperactivity disorder (ADHD) symptoms often experience social and emotional problems. Impaired facial emotion recognition has been suggested as a possible underlying mechanism, although impairments may depend on the type and intensity of emotions. We investigated facial emotion recognition in children with (subthreshold) ADHD and controls using a novel task with children's faces displaying emotional expressions varying in type and intensity. We further investigated associations between emotion recognition accuracy and social and emotional problems in the ADHD group. 83 children displaying ADHD symptoms and 30 controls (6–12 years) completed the Morphed Facial Emotion Recognition Task (MFERT). The MFERT assesses emotion recognition accuracy on four emotions using five expression intensity levels. Teachers and parents rated social and emotional problems on the Strengths and Difficulties Questionnaire. Repeated measures analysis of variance revealed that the ADHD group showed poorer emotion recognition accuracy compared to controls across emotions (small effect). The significant group by expression intensity interaction (small effect) showed that the increase in accuracy with increasing expression intensity was smaller in the ADHD group compared to controls. Multiple regression analyses within the ADHD group showed that emotion recognition accuracy was inversely related to social and emotional problems, but not prosocial behavior. Not only children with an ADHD diagnosis, but also children with subthreshold ADHD experience impairments in facial emotion recognition. This impairment is predictive of social and emotional problems, which may suggest that emotion recognition contributes to the development of social and emotional problems in these children.
- Research Article
- 10.1177/17455057251359761
- Jul 1, 2025
- Women's health (London, England)
Research suggests that women with polycystic ovary syndrome (PCOS) are more likely to suffer from mental health disorders, emotional distress, and have altered hormone profiles (e.g., higher androgens). Past research suggests facial emotion processing is affected by hormones (e.g., androgens), mental health-related disorders, and may be altered in PCOS. The present study examined whether facial emotion recognition (FER) differs between women with and without PCOS symptoms. Observational case-control design. Three groups of participants (women with provisional PCOS, women without PCOS, and men; N = 178) completed a FER task that involved identifying emotions (anger, disgust, fear, happiness, sadness, surprise, or neutral) in images of emotional faces. Overall emotion recognition and emotion-specific accuracy were examined. PCOS symptom severity and provisional diagnoses were also assessed in women via self-report measures, including the polycystic ovary syndrome questionnaire. Women with provisional PCOS had significantly lower emotion recognition accuracy than those without PCOS, and emotion-specific differences were found for fear and disgust. A significant linear effect also emerged for overall FER, revealing men as the least accurate, followed by women with provisional PCOS, and then women without PCOS. The results suggest that women with PCOS may have difficulty with emotion recognition, especially fear and disgust. The sex difference in emotion recognition was in line with previous research. These findings are consistent with the theory that androgens affect emotion recognition and suggest implications for PCOS symptoms on women's emotional well-being and socioemotional functioning.
- Research Article
7
- 10.3390/diagnostics12071721
- Jul 15, 2022
- Diagnostics
The Facial Feedback Hypothesis (FFH) states that facial emotion recognition is based on the imitation of facial emotional expressions and the processing of physiological feedback. In the light of limited and contradictory evidence, this hypothesis is still being debated. Therefore, in the present study, emotion recognition was tested in patients with central facial paresis after stroke. Performance in facial vs. auditory emotion recognition was assessed in patients with vs. without facial paresis. The accuracy of objective facial emotion recognition was significantly lower in patients with vs. without facial paresis and also in comparison to healthy controls. Moreover, for patients with facial paresis, the accuracy measure for facial emotion recognition was significantly worse than that for auditory emotion recognition. Finally, in patients with facial paresis, the subjective judgements of their own facial emotion recognition abilities differed strongly from their objective performances. This pattern of results demonstrates a specific deficit in facial emotion recognition in central facial paresis and thus provides support for the FFH and points out certain effects of stroke.
- Conference Article
23
- 10.1109/smc.2013.785
- Oct 1, 2013
Expression recognition, or emotional state recognition using holistic and feature information, is a vital step in a Driver Assistance System. Many researchers have worked on facial gesture tracking or emotion recognition independently. The purpose of the present paper is to address simultaneous facial gesture tracking and emotion recognition with a soft computing tool, a fuzzy rule-based system (FBS). In human-centered transportation, a large number of road accidents occur due to drowsiness or a bad mood of the driver. The system proposed in this paper takes into account both facial gesture tracking and emotion recognition, so that if there is any sign of reduced attentiveness or driver fatigue, the car is switched to automatic mode. A novel fuzzy system is created, whose rules are defined through analysis of facial gesture variations. The idea behind this paper is to detect facial gestures by tracking the motion of the eyes and lips, along with classifying facial expressions into one of four basic human emotions, viz. happiness, anger, sadness, and surprise, using a fuzzy rule-based system for better system performance. The proposed system achieves 91.66% accuracy for facial gesture detection and 90% accuracy for emotion recognition, while simultaneous facial gesture detection and emotion recognition achieves 94.58% accuracy.
- Conference Article
- 10.1109/aiam57466.2022.00032
- Oct 1, 2022
With the rapid development of artificial intelligence and machine learning in recent years, emotion recognition has gradually become an important research topic. Single-modality emotion recognition has a solid research foundation after long-term development, while drawing on multiple modalities allows more effective information to be extracted, thereby improving the accuracy of emotion recognition. This paper analyzes emotion recognition from physiological signals such as brainwave signals and from facial expressions, applying preprocessing, feature extraction, SVM feature classification, and LSTM combined with convolutional neural network emotion recognition to the acquired signals, and compares the accuracy of mixed-modal emotion recognition. Compared with single facial expression emotion recognition, mixed-modal emotion recognition extracts more feature information and achieves higher accuracy.
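The mixed-modal pipeline described above ends with a fusion step that combines the per-modality classifier outputs. A minimal late-fusion sketch is shown below; the class list, probability values, and weights are illustrative assumptions, not the paper's implementation.

```python
# Minimal late-fusion sketch: combine per-modality class probabilities.
# The probability vectors stand in for hypothetical SVM (EEG) and
# CNN-LSTM (facial) softmax outputs over the same emotion classes.

def fuse_predictions(eeg_probs, face_probs, w_eeg=0.4, w_face=0.6):
    """Weighted average of two modality-level probability vectors, renormalized."""
    if len(eeg_probs) != len(face_probs):
        raise ValueError("probability vectors must cover the same classes")
    fused = [w_eeg * e + w_face * f for e, f in zip(eeg_probs, face_probs)]
    total = sum(fused)
    return [p / total for p in fused]

EMOTIONS = ["happy", "sad", "angry", "neutral"]
eeg = [0.10, 0.50, 0.30, 0.10]   # hypothetical SVM output on EEG features
face = [0.05, 0.60, 0.25, 0.10]  # hypothetical CNN-LSTM output on face frames

fused = fuse_predictions(eeg, face)
predicted = EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]
print(predicted)  # "sad"
```

Weighted late fusion is only one of several possible schemes; feature-level (early) fusion, as suggested by the abstract's mention of extracting more feature information, would instead concatenate the modality features before classification.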
- Conference Article
1
- 10.1109/ijcnn.2015.7280323
- Jul 1, 2015
Automatic facial expression recognition plays an important role in agent-based interface development and data-driven animation. This paper presents an intelligent facial action and emotion recognition system for a humanoid robot. Motivated by the Facial Action Coding System, this research focuses on the recognition of seven basic emotions and 18 Action Units (AUs). Since effective facial representations of original face images are vital for automatic facial emotion recognition, this research implements a novel shape and appearance feature extraction method, which integrates an Independent Active Appearance Model (AAM) with a rotation-invariant feature point detector, BRISK (Binary Robust Invariant Scalable Keypoints). In comparison to AAM with traditional inverse compositional fitting, our model with BRISK fitting has lower computational cost and can extract features from face images with rotation and scaling differences without prior training. Subsequently, shape- and appearance-based neural network AU analyzers are used to detect the 18 AUs. Emotions are then decoded from the derived AUs using a neural network emotion recognizer. The system is integrated with a modern humanoid robot platform. Evaluation results indicate its high accuracy for AU and emotion recognition. It is also among the top performers on the extended Cohn-Kanade (CK+) database in comparison to other existing state-of-the-art applications.
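The final decoding stage above maps detected AUs to a basic emotion. The sketch below illustrates that step with a simple overlap score over prototypical AU combinations drawn from common EMFACS conventions; it stands in for the paper's neural-network emotion recognizer and is not the authors' method.

```python
# Sketch of AU-to-emotion decoding via prototypical AU sets (EMFACS-style).
# The combinations below are common conventions, assumed for illustration.

EMOTION_AUS = {
    "happiness": {6, 12},          # cheek raiser + lip corner puller
    "surprise": {1, 2, 5, 26},     # brow raisers + upper lid raiser + jaw drop
    "sadness": {1, 4, 15},         # inner brow raiser + brow lowerer + lip depressor
    "anger": {4, 5, 7, 23},        # brow lowerer + lid raiser/tightener + lip tightener
}

def decode_emotion(detected_aus):
    """Pick the emotion whose prototypical AU set best overlaps the detections."""
    def overlap(emotion):
        aus = EMOTION_AUS[emotion]
        return len(aus & detected_aus) / len(aus)
    best = max(EMOTION_AUS, key=overlap)
    return best if overlap(best) > 0 else None

print(decode_emotion({6, 12, 25}))  # "happiness"
```

A trained classifier, as used in the paper, can weight partial AU evidence and handle intensity, which this hard set-overlap rule cannot.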
- Research Article
96
- 10.1002/da.20456
- Nov 21, 2007
- Depression and Anxiety
The primary aim of this study was to investigate facial emotion recognition in patients with somatoform disorders (SFD). Also of interest was the extent to which concurrent alexithymia contributed to any changes in emotion recognition accuracy. Twenty patients with SFD and twenty healthy, age, sex and education matched, controls were assessed with the Facially Expressed Emotion Labelling Test of facial emotion recognition and the 26-item Toronto Alexithymia Scale (TAS-26). Patients with SFD exhibited elevated alexithymia symptoms relative to healthy controls. Patients with SFD also recognized significantly fewer emotional expressions than did the healthy controls. However, the group difference in emotion recognition accuracy became nonsignificant once the influence of alexithymia was controlled for statistically. This suggests that the deficit in facial emotion recognition observed in the patients with SFD was most likely a consequence of concurrent alexithymia. Impaired facial emotion recognition observed in the patients with SFD could plausibly have a negative influence on these individuals' social functioning.
- Supplementary Content
- 10.26199/acu.8v90q
- Mar 1, 2021
The Role of Oxytocin in Older Adults’ Facial Emotion Recognition Difficulties
- Research Article
18
- 10.1002/da.20440
- Nov 1, 2008
- Depression and Anxiety
The primary aim of this study was to investigate facial emotion recognition (FER) in patients with somatoform disorders (SFD). Also of interest was the extent to which concurrent alexithymia contributed to any changes in emotion recognition accuracy. Twenty patients with SFD and 20 healthy, age, sex and education matched, controls were assessed with the Facially Expressed Emotion Labelling Test of FER and the 26-item Toronto Alexithymia Scale. Patients with SFD exhibited elevated alexithymia symptoms relative to healthy controls. Patients with SFD also recognized significantly fewer emotional expressions than did the healthy controls. However, the group difference in emotion recognition accuracy became nonsignificant once the influence of alexithymia was controlled for statistically. This suggests that the deficit in FER observed in the patients with SFD was most likely a consequence of concurrent alexithymia. It should be noted that neither depression nor anxiety was significantly related to emotion recognition accuracy, suggesting that these variables did not contribute to the emotion recognition deficit. Impaired FER observed in the patients with SFD could plausibly have a negative influence on these individuals' social functioning.
- Book Chapter
17
- 10.1007/978-3-319-12640-1_42
- Jan 1, 2014
Facial emotion recognition is a significant requirement in the machine vision community. To this end, this paper utilizes geometric facial features, calculates the displacement of feature points between expressive and neutral frames, and finally applies a two-stage fuzzy reasoning model for facial emotion recognition and classification. The prototypical emotion sequence according to the Facial Action Coding System (FACS) is formed by analyzing small, medium, and large displacements. Geometric displacements are fuzzified and mapped onto Action Units (AUs) by a first-stage fuzzy reasoning model, and the AUs are in turn fuzzified and mapped onto an emotion space by a second-stage fuzzy relational model. The overall performance of the proposed system is evaluated on the extended Cohn-Kanade (CK+) database for classifying basic emotions such as surprise, sadness, fear, anger, and happiness. The experimental results on the task of facial emotion analysis and emotion recognition are shown to outperform other existing methods in the literature.
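The two-stage reasoning described above (displacement → AU activation → emotion) can be sketched with triangular memberships and min-based rule firing. The membership parameters, the two AU proxies, and the single surprise rule below are illustrative assumptions, not the chapter's actual rule base.

```python
# Two-stage fuzzy sketch: feature-point displacement -> AU activation -> emotion.
# Stage 1 fuzzifies a normalized displacement into small/medium/large terms;
# stage 2 combines AU-level activations with a min (AND) rule.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify_displacement(d):
    """Stage 1: linguistic terms for a displacement normalized to roughly [0, 1]."""
    return {
        "small": tri(d, -0.1, 0.0, 0.4),
        "medium": tri(d, 0.2, 0.5, 0.8),
        "large": tri(d, 0.6, 1.0, 1.4),
    }

def emotion_scores(brow_raise, jaw_drop):
    """Stage 2: min-based firing of illustrative AU-to-emotion rules."""
    b = fuzzify_displacement(brow_raise)  # proxy for AU1/AU2 activation
    j = fuzzify_displacement(jaw_drop)    # proxy for AU26 activation
    return {
        # IF brow raise is large AND jaw drop is large THEN surprise
        "surprise": min(b["large"], j["large"]),
        # IF brow raise is small AND jaw drop is small THEN neutral-like
        "neutral": min(b["small"], j["small"]),
    }

scores = emotion_scores(brow_raise=0.9, jaw_drop=0.95)
```

With these inputs the surprise rule fires strongly (0.75) and the neutral rule not at all; a full system would defuzzify over all six basic emotions and many AUs.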
- Dissertation
- 10.53846/goediss-7499
- Feb 21, 2022
Emotion recognition is a key component of human social cognition and is considered vital for many domains of life. Studies measuring this ability have documented that performance accuracy in emotion recognition tasks is affected by various factors, ranging from gender, one's own confidence, and hormonal fluctuations, to the modality of stimulus presentation (i.e., audio, visual). The majority of work has focused on the recognition of facial expressions. The results from the small number of studies that made comparisons across the modalities of vocal and facial emotion recognition are contradictory, suggesting a lack of reliability across studies. Therefore, the main aim of this research project was to investigate the impact of the above-mentioned factors on individuals' accuracy of performance while accounting for methodological shortcomings of previous research. Two independent but related studies were conducted. In Study 1, the first aim was to examine whether performance accuracy differs as a function of listeners' and speakers' gender. The second aim was to investigate the influence of vocal stimulus types and their related acoustic parameters on emotion recognition and confidence ratings. Additionally, it was explored whether the correct recognition of vocal emotions elicits confidence judgments. Study 2 was pre-registered and aimed to account for previous assumptions regarding males' 'poor' emotion recognition ability by investigating whether the modality of stimulus presentation (i.e., audio, visual, audio-visual) and hormonal fluctuations (i.e., testosterone, cortisol, and their interaction) impact their performance accuracy and response time in emotion recognition tasks. In both studies, participants were asked to categorize the stimuli with respect to the expressed emotions in a fixed-choice response format.
The results from Study 1 showed that speakers' gender had a significant impact on how listeners judged emotions from the voice; yet, no robust differences were observed in performance accuracy by listeners' gender (manuscript 1). Additionally, the results obtained from this study replicate previous findings by showing that participants could recognize emotions based on differential acoustic patterning. They further add to previous research by demonstrating that emotional expressions are more accurately recognized and confidently judged from non-speech sounds than from emotionally inflected speech. Moreover, they showed that listeners who were better at recognizing vocal expressions of emotion were also more confident in their judgments (manuscript 2). The results from Study 2 indicated that emotion recognition accuracy and response time are greatly improved for the audio-visual presentation of emotional expressions. In addition, they showed that happy expressions are identified faster and with greater accuracy from faces than voices, while angry expressions are better recognized in voices compared to faces. Finally, the overall effect sizes of the testosterone by cortisol interaction on emotion recognition accuracy and response time were small yet significant (manuscript 3). The combined findings from both studies explain inconsistencies in the existing literature by highlighting the importance of distinguishing between these factors when assessing emotion recognition ability. This research project actively contributes to a scientific domain that is currently re-writing our understanding of the role these factors play in the recognition of emotions. It hereby paves the way for impactful future research.
- Research Article
1
- 10.1111/acer.14653
- Aug 1, 2021
- Alcoholism: Clinical and Experimental Research
Alcohol intoxication is associated with significant negative social consequences. Social information processing theory provides a framework for understanding how the accurate decoding and interpretation of social cues are critical for effective social responding. Acute intoxication has the potential to disrupt facial emotion recognition. If alcohol impairs the processing and interpretation of emotional cues, then the resultant behavioral responses may be less effective. The current study tested the association between alcohol intoxication and facial emotion recognition in a naturalistic field study of intoxicated participants. 114 participants (59.4% men; Mage = 24.2 years) who had been consuming alcohol were recruited in the downtown area of a mid-size town surrounded by several drinking establishments in the mid-southern United States. Participants were shown images depicting 5 facial displays of emotion (happy, sad, anger, disgust, and no emotion) portrayed by 1 male and 1 female actor per emotion, and breath alcohol concentration (BrAC) was measured by a field breathalyzer test (M = 0.078%, SD = 0.052). BrAC was significantly negatively associated with emotion recognition accuracy when controlling for average alcohol use, B = -0.35, t = -2.08, p < 0.05, F(3, 110) = 5.28, p < 0.01, R2 = 0.13. A significant BrAC × gender interaction was revealed, B = -0.39, t = -2.07, p = 0.04, ΔR2 = 0.033, p = 0.04, such that men (but not women) displayed a significant negative association between BrAC and emotion recognition accuracy. Acute intoxication was associated with impaired facial emotion recognition, particularly for men, in a field study context. Findings support and extend some previous experimental laboratory-based research and suggest that intoxication can impair the decoding stage of social information processing.
- Research Article
40
- 10.1145/3512925
- Mar 30, 2022
- Proceedings of the ACM on Human-Computer Interaction
The growth of technologies promising to infer emotions raises political and ethical concerns, including concerns regarding their accuracy and transparency. A marginalized perspective in these conversations is that of data subjects potentially affected by emotion recognition. Taking social media as one emotion recognition deployment context, we conducted interviews with data subjects (i.e., social media users) to investigate their notions about accuracy and transparency in emotion recognition and interrogate stated attitudes towards these notions and related folk theories. We find that data subjects see accurate inferences as uncomfortable and as threatening their agency, pointing to privacy and ambiguity as desired design principles for social media platforms. While some participants argued that contemporary emotion recognition must be accurate, others raised concerns about possibilities for contesting the technology and called for better transparency. Furthermore, some challenged the technology altogether, highlighting that emotions are complex, relational, performative, and situated. In interpreting our findings, we identify new folk theories about accuracy and meaningful transparency in emotion recognition. Overall, our analysis shows an unsatisfactory status quo for data subjects that is shaped by power imbalances and a lack of reflexivity and democratic deliberation within platform governance.
- Research Article
2
- 10.15276/hait.06.2023.13
- Oct 12, 2023
- Herald of Advanced Information Technology
The relevance of solving the problem of facial emotion recognition on human images in the creation of modern intelligent systems of computer vision and human-machine interaction, online learning and emotional marketing, health care and forensics, machine graphics and game intelligence is shown. Successful examples of technological solutions to the problem of facial emotion recognition using transfer learning of deep convolutional neural networks are shown. But the use of such popular datasets as DISFA, CelebA, AffectNet, for deep learning of convolutional neural networks does not give good results in terms of the accuracy of emotion recognition, because almost all training sets have fundamental flaws related to errors in their creation, such as the lack of data of a certain class, imbalance of classes, subjectivity and ambiguity of labeling, insufficient amount of data for deep learning, etc. It is proposed to overcome the noted shortcomings of popular datasets for emotion recognition by adding to the training sample additional pseudo-labeled images with human emotions, on which recognition occurs with high accuracy. The aim of the research is to increase the accuracy of facial emotion recognition on the image of a human by developing a pseudo-labeling method for transfer learning of a deep neural network. To achieve the aim, the following tasks were solved: a convolutional neural network model, previously trained on the ImageNet set using the transfer learning method, was adjusted on the RAF-DB data set to solve emotion recognition tasks; a pseudo-labeling method for the RAF-DB set data was developed for semi-supervised learning of a convolutional neural network model for the task of facial emotion recognition; the accuracy of facial emotion recognition was analyzed based on the developed convolutional neural network model and the method of pseudo-labeling of RAF-DB set data for its correction.
It is shown that the use of the developed pseudo-labeling method and transfer learning of the MobileNet V1 convolutional neural network model increased the accuracy of facial emotion recognition on the images of the RAF-DB dataset by 2 percentage points (from 76 to 78%) according to the F1 estimate. At the same time, taking into account the significant imbalance of the classes for the 7 main emotions in the training set, there is a significant increase in the accuracy of recognizing underrepresented emotions such as surprise (from 71 to 77%), fearful (from 64 to 69%), sad (from 72 to 76%), angry (from 64 to 74%), and neutral (from 66 to 71%). The accuracy of recognizing the emotion happy, which is the most common, decreased (from 91 to 86%). Thus, it can be concluded that the developed pseudo-labeling method gives good results in overcoming such shortcomings of datasets for deep learning of convolutional neural networks as the lack of data of a certain type, class imbalance, and an insufficient amount of data for deep learning.
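The core of the pseudo-labeling step described above is a confidence filter over the model's predictions on unlabeled images. The sketch below illustrates that step only; the 0.95 threshold, the three-class probabilities, and the selection rule are illustrative assumptions and not the paper's exact procedure.

```python
# Pseudo-labeling sketch: keep only high-confidence predictions on unlabeled
# images, then merge them into the training set for another fine-tuning round.

def pseudo_label(unlabeled_probs, threshold=0.95):
    """Return (sample_index, argmax_class) pairs for confident predictions."""
    selected = []
    for i, probs in enumerate(unlabeled_probs):
        best = max(range(len(probs)), key=probs.__getitem__)
        if probs[best] >= threshold:
            selected.append((i, best))
    return selected

# Hypothetical softmax outputs of a transfer-learned model on unlabeled images.
probs = [
    [0.97, 0.02, 0.01],  # confident -> pseudo-labeled as class 0
    [0.40, 0.35, 0.25],  # ambiguous -> discarded
    [0.01, 0.03, 0.96],  # confident -> pseudo-labeled as class 2
]
labels = pseudo_label(probs)
print(labels)  # [(0, 0), (2, 2)]
```

The threshold trades label noise against coverage: a high threshold admits fewer but cleaner pseudo-labels, which matters for the minority classes the paper targets, since confident mistakes on them are amplified by retraining.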
- Research Article
32
- 10.1016/0891-4222(95)00025-i
- Sep 1, 1995
- Research in Developmental Disabilities
The relationships among facial emotion recognition, social skills, and quality of life