A Novel Framework for the Accuracy Enhancement of Facial Expression Recognition System

Abstract

Facial expression recognition has become a promising field for more natural interaction with computing devices and machines, and has been the focus of attention for many research scholars over the past decade. Newly developed facial emotion recognition methods focus on the neutral expression or the six basic expressions used in most state-of-the-art methods. Accuracy is the main problem in facial expression recognition results. The problem to be tackled is the optimization of the expression recognition algorithm, i.e. to detect, isolate and correctly classify one of the major expressions of the human face with accuracy targeted towards 100%. This work aims to improve the accuracy of facial expression recognition by using the Histogram of Oriented Gradients (HOG) and the Local Ternary Pattern (LTP).
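As a rough illustration of the LTP descriptor mentioned above, the following sketch (written independently of the paper; the threshold t=5 and the 3x3 neighborhood ordering are assumptions, not the paper's configuration) computes the two binary codes into which a ternary pattern is conventionally split:

```python
import numpy as np

def ltp_codes(img, t=5):
    """Local Ternary Pattern: compare each pixel's 8 neighbors to the
    center with a tolerance t, then split the ternary pattern into an
    'upper' and a 'lower' binary code (the usual LTP decomposition)."""
    img = img.astype(np.int32)
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    # 8 neighbors, clockwise from the top-left corner
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2),
               (2, 2), (2, 1), (2, 0), (1, 0)]
    upper = np.zeros_like(center)
    lower = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[dy:dy+h-2, dx:dx+w-2]
        upper += (neigh >= center + t).astype(np.int32) << bit  # ternary +1
        lower += (neigh <= center - t).astype(np.int32) << bit  # ternary -1
    return upper, lower

patch = np.array([[10, 20, 60],
                  [90, 50, 40],
                  [50, 55, 99]])
up, lo = ltp_codes(patch, t=5)
```

Histograms of these upper and lower codes over image blocks, concatenated with HOG descriptors, would then form the combined feature vector.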

Similar Papers
  • Research Article
  • Citations: 88
  • 10.1177/070674370505000905
Facial Expression and Sex Recognition in Schizophrenia and Depression
  • Aug 1, 2005
  • The Canadian Journal of Psychiatry
  • Benoit Bediou + 6 more

Impaired facial expression recognition in schizophrenia patients contributes to abnormal social functioning and may predict functional outcome in these patients. Facial expression processing involves individual neural networks that have been shown to malfunction in schizophrenia. Whether these patients have a selective deficit in facial expression recognition or a more global impairment in face processing remains controversial. To investigate whether patients with schizophrenia exhibit a selective impairment in facial emotional expression recognition, compared with patients with major depression and healthy control subjects. We studied performance in facial expression recognition and facial sex recognition paradigms, using original morphed faces, in a population with schizophrenia (n=29) and compared their scores with those of depression patients (n=20) and control subjects (n=20). Schizophrenia patients achieved lower scores than both other groups in the expression recognition task, particularly in fear and disgust recognition. Sex recognition was unimpaired. Facial expression recognition is impaired in schizophrenia, whereas sex recognition is preserved, which highly suggests an abnormal processing of changeable facial features in this disease. A dysfunction of the top-down retrograde modulation coming from limbic and paralimbic structures on visual areas is hypothesized.

  • Abstract
  • 10.1093/schbul/sbaa030.315
M3. THEORY OF MIND IN INDIVIDUALS WITH FIRST-EPISODE OF SCHIZOPHRENIA AND CHILDHOOD TRAUMA
  • May 1, 2020
  • Schizophrenia Bulletin
  • Natalia E Fares Otero + 10 more

Background: A history of Childhood Trauma (CT), i.e., physical or emotional abuse or neglect, and sexual abuse, is reportedly more prevalent in individuals suffering from psychosis than in the general population. Crucial questions remain about the nature of interpersonal functioning in CT survivors, involving the capacity to understand and interpret other people's thoughts and feelings, especially in individuals with a First Episode of Schizophrenia (FESz). We investigated the Theory of Mind (ToM) performance of patients with FESz related to CT in comparison to healthy controls (HC).
Methods: Participants (n = 77) completed the Reading the Mind in the Eyes Test Revised (RMET) and the Childhood Experience of Care and Abuse Questionnaire (CECA-Q). The Word Accentuation Test (TAP) was used to estimate premorbid IQ. Seventeen patients with FESz (mean age = 24.9, SD = 5.4; male = 79.6%; education = 10.7 years, SD = 1.5) were recruited at the First-Episode Psychosis Program, Hospital 12 de Octubre, Madrid, and 60 HC (mean age = 27.6, SD = 7.2; male = 45.6%; education = 14.5 years, SD = 2.8) were healthy volunteers. Between-group comparisons were made using ANCOVA, with group and CT as fixed factors. Age, years of education and IQ were included as covariates.
Results: Preliminary results showed that, compared to controls, patients with FESz performed worse on the recognition and interpretation of facial expressions, in both male and female faces (p < .001). Patients with FESz did not perform differently from HC in the recognition and interpretation of positive facial expressions (p = .074). However, lower interpretation of negative facial expressions (p < .001) and of neutral facial expressions (p < .001) was shown in patients with FESz compared to HC. Higher interpretation of facial expressions was shown in FESz patients with CT (n = 12), only for female faces (p < .001), compared to patients without CT (n = 7). Higher interpretation of facial expressions was also shown in HC with CT (n = 28), only for negative facial expressions (p = .014), compared to HC without CT (n = 32). Female patients with FESz performed worse on the recognition and interpretation of negative (p = .024) and neutral faces (p < .001), only for male faces (p = .038), compared to female HC. Male patients with FESz performed worse on the recognition and interpretation of positive (p = .038) and negative facial expressions (p = .001) of male faces (p < .001), compared to male HC. In comparison to male FESz patients without CT, male FESz patients with CT showed higher interpretation of female faces (p = .030). In comparison to male HC without CT, male HC with CT showed higher interpretation of male faces (p = .031).
Discussion: Consistent with previous research, our preliminary findings indicated theory of mind deficits in patients with FESz. Interestingly, in our study the alterations in the interpretation and recognition of facial expressions were shown only for negative and neutral, but not for positive, facial expressions. Furthermore, and contrary to the literature, we found better interpretation and recognition of facial expressions in patients and healthy controls who were survivors of CT. However, this was specifically observed for female faces in patients and for negative facial expressions in healthy controls. In addition, female and male patients and healthy controls seem to interpret facial expressions differently in relation to childhood trauma. Nevertheless, increasing our sample size would give us the opportunity to draw further conclusions about how adverse experiences during childhood may influence social abilities in patients with FESz.

  • Research Article
  • Citations: 12
  • 10.1007/s11042-016-3883-3
A Video-Based Facial Motion Tracking and Expression Recognition System
  • Sep 1, 2016
  • Multimedia Tools and Applications
  • Jun Yu + 1 more

We propose a facial motion tracking and expression recognition system based on video data. Using a 3D deformable facial model, the online statistical model (OSM) and the cylinder head model (CHM) were combined to track 3D facial motion in a particle-filtering framework. For facial expression recognition, two algorithms were developed: a fast and efficient one, and a robust and precise one. With the first, facial animation and facial expression were retrieved sequentially: after facial animation was obtained, facial expression was recognized using static facial expression knowledge learned from anatomical analysis. With the second, facial animation and facial expression were retrieved simultaneously to increase reliability and robustness with noisy input data; facial expression was recognized by fusing static and dynamic facial expression knowledge, the latter learned by training a multi-class expressional Markov process on a video database. The experiments showed that facial motion tracking by OSM+CHM is more pose-robust than by OSM alone, and that the facial expression score of the robust and precise algorithm is higher than those of other state-of-the-art facial expression recognition methods.

  • Research Article
  • Citations: 9
  • 10.1080/20008066.2023.2214388
Adults with a history of childhood maltreatment with and without mental disorders show alterations in the recognition of facial expressions
  • Jun 15, 2023
  • European Journal of Psychotraumatology
  • Lara-Lynn Hautle + 7 more

Background: Individuals with child maltreatment (CM) experiences show alterations in emotion recognition (ER). However, previous research has mainly focused on populations with specific mental disorders, which makes it unclear whether alterations in the recognition of facial expressions are related to CM, to the presence of mental disorders, or to their combination, and has focused on ER of emotional rather than neutral facial expressions. Moreover, recognition has commonly been studied with static stimulus material. Objective: We assessed recognition of dynamic (closer to real life) negative, positive and neutral facial expressions in individuals characterised by CM, rather than by a specific mental disorder. Moreover, we assessed whether they show a negativity bias for neutral facial expressions and whether the presence of one or more mental disorders affects recognition. Methods: Ninety-eight adults with CM experiences (CM+) and 60 non-maltreated (CM−) adult controls watched 200 non-manipulated coloured video sequences, showing 20 neutral and 180 emotional facial expressions, and indicated whether they interpreted each expression as neutral or as one of eight emotions. Results: The CM+ group showed significantly lower scores in the recognition of positive, negative and neutral facial expressions than the CM− group (p < .050). Furthermore, the CM+ group showed a negativity bias for neutral facial expressions (p < .001). When accounting for mental disorders, the significant effects stayed consistent, except for the recognition of positive facial expressions: individuals from the CM+ group with, but not without, a mental disorder scored lower than controls without a mental disorder. Conclusions: CM might have long-lasting influences on the ER abilities of those affected.
Future research should explore possible effects of ER alterations on everyday life, including implications of the negativity bias for neutral facial expressions on emotional wellbeing and relationship satisfaction, providing a basis for interventions that improve social functioning.

  • Research Article
  • Citations: 65
  • 10.1007/s11042-018-6040-3
Efficient facial expression recognition using histogram of oriented gradients in wavelet domain
  • May 4, 2018
  • Multimedia Tools and Applications
  • Swati Nigam + 2 more

Facial expression recognition plays a significant role in human behavior detection. In this study, we present an efficient and fast facial expression recognition system. We introduce a new feature called W_HOG, where W indicates the discrete wavelet transform and HOG indicates the histogram of oriented gradients feature. The proposed framework comprises four stages: (i) face processing, (ii) domain transformation, (iii) feature extraction and (iv) expression recognition. Face processing is composed of face detection, cropping and normalization steps. In domain transformation, spatial-domain features are transformed into the frequency domain by applying the discrete wavelet transform (DWT). Feature extraction is performed by retrieving the Histogram of Oriented Gradients (HOG) feature in the DWT domain, which is termed the W_HOG feature. For expression recognition, the W_HOG feature is supplied to a well-designed tree-based multiclass support vector machine (SVM) classifier with a one-versus-all architecture. The proposed system is trained and tested on the benchmark CK+, JAFFE and Yale facial expression datasets. Experimental results show that the proposed method is effective for facial expression recognition and outperforms existing methods.
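The domain-transformation stage described above (a DWT applied before HOG) can be sketched with a single-level 2D Haar transform; this is a generic, unnormalized averaging/differencing illustration, not necessarily the authors' exact wavelet or normalization:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar DWT: returns the approximation (LL) and
    three detail sub-bands (LH, HL, HH). In a W_HOG-style pipeline,
    HOG would then be computed on these sub-bands instead of raw pixels."""
    img = img.astype(np.float64)
    a, b = img[0::2, :], img[1::2, :]          # pair rows
    lo_r, hi_r = (a + b) / 2.0, (a - b) / 2.0  # row average / difference
    a, b = lo_r[:, 0::2], lo_r[:, 1::2]        # pair columns of low band
    LL, LH = (a + b) / 2.0, (a - b) / 2.0
    a, b = hi_r[:, 0::2], hi_r[:, 1::2]        # pair columns of high band
    HL, HH = (a + b) / 2.0, (a - b) / 2.0
    return LL, LH, HL, HH

img = np.arange(16.0).reshape(4, 4)
LL, LH, HL, HH = haar_dwt2(img)
```

Each sub-band is half the input size in each dimension, so the subsequent HOG computation also becomes cheaper, which is consistent with the "efficient and fast" claim.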

  • Research Article
  • Citations: 40
  • 10.3389/fneur.2016.00230
Altered Kinematics of Facial Emotion Expression and Emotion Recognition Deficits Are Unrelated in Parkinson’s Disease
  • Dec 14, 2016
  • Frontiers in Neurology
  • Matteo Bologna + 6 more

Altered emotional processing, including reduced emotion facial expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques. It is not known whether altered facial expression and recognition in PD are related. To investigate possible deficits in facial emotion expression and emotion recognition and their relationship, if any, in patients with PD. Eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Patients also yielded worse Ekman global scores and disgust, sadness, and fear sub-scores than healthy controls (all Ps < 0.001). Altered facial expression kinematics and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). The results of this study provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.

  • Research Article
  • Citations: 83
  • 10.1007/s11036-019-01366-9
Human Behavior Understanding in Big Multimedia Data Using CNN based Facial Expression Recognition
  • Sep 9, 2019
  • Mobile Networks and Applications
  • Muhammad Sajjad + 4 more

Human behavior analysis from big multimedia data has become a trending research area with applications in various domains such as surveillance, medicine, sports, and entertainment. Facial expression analysis is one of the most prominent clues for determining the behavior of an individual; however, it is very challenging due to variations in face poses, illumination, and different facial tones. In this paper, we analyze human behavior using facial expressions in several well-known TV-series videos. Firstly, we detect faces using the Viola-Jones algorithm, followed by tracking with the Kanade-Lucas-Tomasi (KLT) algorithm. Secondly, we use histogram of oriented gradients (HOG) features with a support vector machine (SVM) classifier for facial recognition. Next, we recognize facial expressions using the proposed light-weight convolutional neural network (CNN). We utilize data augmentation techniques to overcome the issue of faces appearing from different views and under different lighting conditions in video data. Finally, we predict human behaviors using an occurrence matrix acquired from facial recognition and expressions. Subjective and objective experimental evaluations demonstrate better performance for both facial expression recognition and human behavior understanding.
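The HOG features used above for face recognition can be illustrated at the level of a single cell; the minimal sketch below (9 unsigned-orientation bins are a common default, not necessarily this paper's settings) bins gradient orientations weighted by gradient magnitude:

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Histogram of oriented gradients for one cell: finite-difference
    gradients, then a magnitude-weighted histogram of the unsigned
    orientation (0..180 degrees)."""
    cell = cell.astype(np.float64)
    gy, gx = np.gradient(cell)                      # axis 0 = y, axis 1 = x
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0    # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m
    return hist

# A purely vertical ramp: all gradient energy falls in the 90-degree bin.
cell = np.tile(np.arange(8.0), (8, 1)).T
hist = hog_cell_histogram(cell)
```

A full HOG descriptor would additionally group cells into overlapping blocks and normalize each block before concatenation.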

  • Conference Article
  • Citations: 4
  • 10.1109/fbie.2008.53
Facial Expression Recognition and Synthesis on Affective Emotions Composition
  • Dec 1, 2008
  • Xu Chao + 1 more

Facial expression recognition and synthesis are important research fields in affective computing for studying how human beings react to their environments. With the rapid development of mathematical theory on multivariate statistics and of multimedia technology, especially image processing, facial expression recognition researchers have achieved many useful results. Recent studies show that approaches to facial modeling and to expression recognition and synthesis analysis could be adapted to security control or even real-time health monitoring in the real world. Similarly, how to achieve facial expression recognition and synthesis with a user-independent mechanism is a central design question in general. Based on a cognitive analysis of independent users' affective facial recognition, we propose an emotion composition model to infer a user's new affective facial expressions. Principal component, cluster and discriminant analysis were applied to show and verify that an independent user's affective expressions can be composed for synthesis from basic facial expressions such as happy, neutral and unhappy. Experiments were conducted to validate the proposed model.

  • Research Article
  • Citations: 32
  • 10.1109/taffc.2022.3181033
Domain-Incremental Continual Learning for Mitigating Bias in Facial Expression and Action Unit Recognition
  • Oct 1, 2023
  • IEEE Transactions on Affective Computing
  • Nikhil Churamani + 2 more

As Facial Expression Recognition (FER) systems become integrated into our daily lives, these systems need to prioritise making fair decisions instead of only aiming at higher individual accuracy scores. From surveillance systems to monitoring the mental and emotional health of individuals, these systems need to balance the accuracy-versus-fairness trade-off to make decisions that do not unjustly discriminate against specific under-represented demographic groups. Identifying bias as a critical problem in facial analysis systems, different methods have been proposed that aim to mitigate bias at both the data and algorithmic levels. In this work, we propose the novel use of Continual Learning (CL), in particular Domain-Incremental Learning (Domain-IL) settings, as a potent bias mitigation method to enhance the fairness of FER systems. We compare different non-CL-based and CL-based methods for their performance and fairness scores on expression recognition and Action Unit (AU) detection tasks using two popular benchmarks, the RAF-DB and BP4D datasets, respectively. Our experimental results show that CL-based methods, on average, outperform other popular bias mitigation techniques on both accuracy and fairness metrics.
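The accuracy-versus-fairness trade-off described above is commonly quantified by comparing per-group accuracies. A minimal sketch (the demographic labels and the max-gap metric below are illustrative assumptions, not the paper's exact fairness score):

```python
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Classification accuracy computed separately for each demographic group."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

def fairness_gap(acc_by_group):
    """Largest accuracy difference between any two groups
    (0 means perfectly balanced performance across groups)."""
    vals = list(acc_by_group.values())
    return max(vals) - min(vals)

y_true = ["happy", "sad", "happy", "sad", "happy", "sad"]
y_pred = ["happy", "sad", "happy", "happy", "sad", "sad"]
groups = ["A", "A", "A", "B", "B", "B"]
acc = per_group_accuracy(y_true, y_pred, groups)
gap = fairness_gap(acc)
```

A bias mitigation method then seeks to shrink this gap without losing much overall accuracy.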

  • Research Article
  • 10.1002/alz.055432
Impairment of recognition of facial expressions in different types of dementia
  • Dec 1, 2021
  • Alzheimer's & Dementia
  • Bahar Güntekin + 11 more

Background: Impairment of facial expression recognition in dementia is one of the cognitive deficits that could affect the social life of dementia patients. The severity of facial expression recognition impairment in different types of dementia has not been fully addressed. Previous studies showed that EEG event-related oscillation (ERO) studies could reveal the brain dynamics during successful facial expression recognition. Furthermore, impaired facial recognition in Alzheimer's disease patients was reflected in abnormal EROs (Güntekin et al., 2019). The present study aims to compare impaired facial expression recognition between different types of dementia by analysis of event-related delta responses. The role of delta responses in facial expression and emotional paradigms was reported previously by several researchers (Güntekin and Başar, 2014).
Method: 25 healthy elderly controls (HC), 25 Mild Cognitive Impairment patients (MCI), 25 patients with Alzheimer's disease (AD), 15 patients with Parkinson's disease with Mild Cognitive Impairment (PDMCI) and 16 patients with Parkinson's disease Dementia (PDD) were included in the study. Electroencephalographic activity was recorded during a facial expression recognition task (angry, happy, neutral). Delta power and phase-locking were computed for each facial recognition stimulus and compared among the groups (ANOVA, p < 0.05).
Result: Both delta power and delta phase-locking were lower in the dementia groups than in healthy controls, and were lowest in PDD. Figure 1 shows the grand average of delta power in all groups for the right occipital location. PDD (p < 0.0001) and AD (p < 0.05) had lower delta power than the healthy controls; the reduction of delta responses in PDD and AD was found especially over occipital locations. MCI and PDMCI subjects showed less impaired facial expression recognition compared to AD and PDD.
Conclusion: The present study showed that dementia patients had severe facial recognition deficits that increased with cognitive impairment. Event-related delta responses successfully revealed the impaired recognition of facial expression in the different dementia groups. Among all dementia subjects, PDD patients had the most reduced delta responses, which could be an electrophysiological biomarker of impaired facial expression recognition. Acknowledgments: This work (grant number 218S314) was supported by TUBITAK.

  • Research Article
  • Citations: 217
  • 10.1111/j.1469-7610.2008.02020.x
Deficits in facial expression recognition in male adolescents with early-onset or adolescence-onset conduct disorder
  • Apr 21, 2009
  • Journal of Child Psychology and Psychiatry, and Allied Disciplines
  • Graeme Fairchild + 4 more

Background: We examined whether conduct disorder (CD) is associated with deficits in facial expression recognition and, if so, whether these deficits are specific to the early-onset form of CD, which emerges in childhood. The findings could potentially inform the developmental taxonomic theory of antisocial behaviour, which suggests that early-onset and adolescence-limited forms of CD are subject to different aetiological processes. Method: Male adolescents with either early-onset CD (n = 42) or adolescence-onset CD (n = 39), and controls with no history of serious antisocial behaviour and no current psychiatric disorder (n = 40) completed tests of facial expression and facial identity recognition. Dependent measures were: (a) correct recognition of facial expressions of anger, disgust, fear, happiness, sadness, and surprise, and (b) the number of correct matches of unfamiliar faces. Results: Relative to controls, recognition of anger, disgust, and happiness in facial expressions was disproportionately impaired in participants with early-onset CD, whereas recognition of fear was impaired in participants with adolescence-onset CD. Participants with CD who were high in psychopathic traits showed impaired fear, sadness, and surprise recognition relative to those low in psychopathic traits. There were no group differences in facial identity recognition. Conclusions: Both CD subtypes were associated with impairments in facial recognition, although these were more marked in the early-onset subgroup. Variation in psychopathic traits appeared to exert an additional influence on the recognition of fear, sadness and surprise. Implications of these data for the developmental taxonomic theory of antisocial behaviour are discussed.

  • Research Article
  • Citations: 44
  • 10.1016/j.jad.2019.08.006
The role of the right prefrontal cortex in recognition of facial emotional expressions in depressed individuals: fNIRS study
  • Aug 5, 2019
  • Journal of Affective Disorders
  • Anna Manelis + 4 more


  • Research Article
  • Citations: 11
  • 10.1007/s42452-020-03999-y
Iranian kinect face database (IKFDB): a color-depth based face database collected by kinect v.2 sensor
  • Jan 1, 2021
  • SN Applied Sciences
  • Seyed Muhammad Hossein Mousavi + 1 more

This study presents a new color-depth face database gathered from Iranian subjects of different genders and age ranges. With suitable databases, it is possible to validate and assess available methods in different research fields. This database has applications in different fields such as face recognition, age estimation, facial expression recognition and facial micro-expression recognition. Image databases are mostly large, depending on their size and resolution. Color images usually consist of three channels, namely red, green and blue, but in the last decade another image type has emerged, named the "depth image". Depth images are used to calculate the range and distance between objects and the sensor; depending on the depth sensor technology, range data can be acquired in different ways. The Kinect sensor version 2 is capable of acquiring color and depth data simultaneously. Facial expression recognition is an important field in image processing, with multiple uses from animation to psychology. Currently, only a few color-depth (RGB-D) facial micro-expression recognition databases exist. Adding depth data to color data increases the accuracy of the final recognition. Due to the shortage of color-depth based facial expression databases and some weaknesses in the available ones, a new RGB-D face database covering the Middle-Eastern face type is presented in this paper. In the validation section, the database is compared with some famous benchmark face databases. For evaluation, Histogram of Oriented Gradients features are extracted, and classification algorithms such as the Support Vector Machine, a Multi-Layer Neural Network and a deep learning method, the Convolutional Neural Network, are employed. The results are promising.

  • Book Chapter
  • Citations: 5
  • 10.1007/978-3-319-19947-4_1
Face and Facial Expressions Recognition and Analysis
  • Sep 26, 2015
  • Jianfeng Ren + 2 more

Face recognition and facial expression analysis are essential abilities of humans, providing basic visual clues during human-computer interaction. It is important to give virtual humans and social robots such capabilities in order to achieve autonomous behavior. The local binary pattern (LBP) has been widely used in face recognition and facial expression analysis. It is popular because of its robustness to illumination variation and alignment error. However, the local binary pattern still has some limitations, e.g. it is sensitive to image noise. The local ternary pattern (LTP), fuzzy LBP and many other LBP variants partially solve this problem. However, these approaches treat corrupted image patterns as they are, and have no mechanism to recover the underlying patterns. In view of this, we develop a noise-resistant LBP (NRLBP) to preserve image micro-structures in the presence of noise. We first encode a small pixel difference as an uncertain state, and then determine its value based on the other bits of the LBP code. Most image micro-structures are represented by uniform codes, while non-uniform codes mainly represent noise patterns. Therefore, we assign the value of the uncertain bit so as to form possible uniform codes. In this way, we develop an error-correction mechanism to recover distorted image patterns. In addition, we find that some image patterns, such as lines, are not captured by uniform codes; they represent a set of important local primitives for pattern recognition. We thus define an extended noise-resistant LBP (ENRLBP) to capture line patterns. NRLBP and ENRLBP are validated extensively on face recognition, facial expression analysis and other recognition tasks. They are shown to be more resistant to image noise than LBP, LTP and many other variants. These two approaches greatly enhance the performance of face recognition and facial expression analysis.
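The uniform-code idea underlying the noise-resistant LBP above can be sketched as follows: a basic 8-neighbor LBP code and the standard uniformity test (at most two 0/1 transitions in the circular bit string). This is only the building block; the uncertain-bit error-correction step of NRLBP itself is not reproduced here.

```python
def lbp_code(patch):
    """Basic 8-neighbor LBP for a 3x3 patch: threshold each neighbor
    against the center pixel, reading bits clockwise from the top-left."""
    c = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << i) for i, n in enumerate(neighbors) if n >= c)

def is_uniform(code):
    """A code is 'uniform' if its circular 8-bit string has at most two
    0->1 / 1->0 transitions; non-uniform codes mostly encode noise."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2

# A clean edge micro-structure yields a uniform code.
edge = [[9, 9, 9],
        [1, 5, 9],
        [1, 1, 1]]
code = lbp_code(edge)
```

NRLBP's refinement is to mark bits whose pixel difference is small as uncertain, then resolve them toward whichever assignment produces a uniform code.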

  • Research Article
  • Citations: 41
  • 10.1111/jnp.12130
Consequences of brain tumour resection on emotion recognition.
  • Jul 12, 2017
  • Journal of Neuropsychology
  • Giulia Mattavelli + 9 more

Emotion processing impairments are common in patients undergoing brain surgery for fronto-temporal tumour resection, with potential consequences for social interactions. However, evidence is controversial concerning the side and site of lesions causing such deficits. This study investigates visual and auditory emotion recognition in brain tumour patients with the aim of clarifying which lesion sites are related to impairments in emotion processing in different modalities. Thirty-four patients were evaluated, before and after surgery, on facial expression and emotional prosody recognition; voxel-based lesion-symptom mapping (VLSM) analyses were performed on patients' post-surgery MRI images. Results showed that patients' performance decreased after surgery in both visual and auditory modalities, but, in general, recovered 3 months after surgery. In facial expression recognition, left brain-damaged patients showed greater post-surgery deterioration than right brain-damaged ones, whose performance specifically decreased for sadness and fear. VLSM analysis revealed two segregated areas in the left hemisphere accounting for post-surgery scores for happy (fronto-temporo-insular region) and surprised (middle frontal gyrus and inferior fronto-occipital fasciculus) facial expressions. Our findings demonstrate that surgical removal of tumours in the fronto-temporal region produces impairment in facial emotion recognition with overall recovery at 3 months, suggesting a partially different representation of positive and negative emotions in the left and right hemispheres for visually, but not auditorily, presented emotions; moreover, we show that deficits in the recognition of specific expressions are associated with discrete lesion locations.
