Facial expression recognition based on FECN under artificial intelligence
- Research Article
2
- 10.59214/cultural/3.2023.34
- Jul 29, 2023
- Interdisciplinary Cultural and Humanities Review
The relevance of this research is determined by the importance of a thorough study of the methods, schemes and models used by artificial intelligence to mechanise creativity under modern conditions of active technological development. The study aims to analyse the main processes taking place in modern art in connection with the active technologisation of work processes and to identify the leading concepts regarding the possibility of creating machine art in the future. The methods employed are theoretical, such as analysis, systematisation and generalisation, for studying key problems and the further development of creativity based on artificial intelligence. The study examines in detail the main developments in Artificial General Intelligence and Artificial Narrow Intelligence, in particular the achievements of Generative Adversarial Networks and Creative Adversarial Networks. Artificial intelligence-generated art demonstrates the remarkable capabilities of these technologies, and the evolving role of artificial intelligence in the arts has introduced "digital art". Generative Adversarial Networks serve as a foundational tool for artists who use digital methods and texture generation to create unique compositions. Furthermore, sculptors collaborate with artificial intelligence tools to convert drawings into 3D models or to transform historical art databases into sculptures. Creative thinking, a hallmark of human intelligence, is framed here as artificial intelligence's ability to generate new and original ideas. The development of emotional intelligence in artificial intelligence enables empathetic responses and the identification of human emotions through voice and facial expressions. The issues of authorial intentionality, awareness of the creative process, and the psychological foundations of artificial empathy and emotional intelligence define the prospects for the development of neuroscience.
Challenges persist in defining creativity, authorship, and the legal aspects of artificial intelligence-generated art. The study materials may be useful for artists, art educators, technologists, and researchers interested in the intersection of technology and art; legal professionals (especially in intellectual property law) and individuals involved in artificial intelligence development may also find these findings valuable.
- Conference Article
4
- 10.1109/cnmt.2009.5374558
- Dec 1, 2009
The expression of the Gabor wavelet filter is presented and explored in detail. Guided by practical requirements, a new multichannel filter bank based on the Gabor wavelet is designed on grounds of both theory and practicality: its centre frequencies span the range from low to high frequency, with 6 orientations and 6 scales. The bank can extract features from low-quality facial expression images and is robust for automatic facial expression recognition. Experimental results show that the proposed method performs well when applied to a facial expression recognition system. There has been growing interest in improving the interaction between humans and computers, and facial expressions are argued to play an essential role in social interaction with other human beings. Facial expression is a major channel of human emotional communication and a visible, changeable manifestation of human cognitive activity and psychopathology. It has been reported that facial expression constitutes 55% of the effect of a communicated message, while language and voice constitute 7% and 38% respectively. With the rapid development of computer vision and artificial intelligence, facial expression recognition has become a key technology for advanced human-computer interaction, and it has attracted increasing attention. The objective of facial expression recognition research is to use the information conveyed by expressions automatically, reliably and efficiently. As a typical pattern-recognition problem, the performance of an automatic recognition system is determined by the facial expression features it represents; feature extraction is therefore crucial to the recognition process. If inadequate features are provided, even the best classifier can fail to achieve accurate recognition.
In most facial expression classification tasks, feature extraction yields a very large number of features, from which a smaller subset must then be selected according to some optimality criterion. Gabor filters have proved effective for expression recognition because of their superior capability for multi-scale representation. The Gabor wavelet closely models the receptive fields of biological visual neurons, and its spatial and frequency properties can be tuned to the facial expression characteristics of interest, which makes it well suited to face analysis and expression processing. In this paper we focus on extracting features useful for classification and recognition. The object is the static image, which can be obtained using standard video tools. The method is simple, reliably extracts the typical features, and achieves a higher recognition rate. We use the responses of Gabor filters with six orientations and six scales. Experimental results show that the proposed method performs well when applied to an automatic facial expression recognition system. The remainder of this paper is organized as follows: Section 2 describes the Gabor filter's principle, properties and feature characterization in detail, presents the adaptation scheme for choosing the orientation and frequency of the Gabor filter to extract facial expression features, and shows the convolution output of the original image. Section 3 presents and discusses experimental results. Finally, conclusions and future work are given in Section 4.
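The 6-orientation, 6-scale filter bank described above can be sketched in plain NumPy. The wavelength progression, sigma-to-wavelength ratio, and kernel size below are illustrative assumptions; the paper does not state its exact parameter values:

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, gamma=0.5, psi=0.0):
    """Real Gabor kernel: Gaussian envelope modulated by a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)      # rotate to orientation theta
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * x_t / lam + psi)

def gabor_bank(n_orient=6, n_scale=6, size=15):
    """Bank of 6 orientations x 6 scales, centre frequency from high to low."""
    bank = []
    for s in range(n_scale):
        lam = 3.0 * (1.3 ** s)        # wavelength grows with scale (illustrative)
        sigma = 0.56 * lam            # common sigma-to-wavelength ratio
        for o in range(n_orient):
            bank.append(gabor_kernel(size, sigma, o * np.pi / n_orient, lam))
    return bank

def gabor_features(img, bank):
    """Mean/std of each filter response, via circular FFT convolution."""
    F = np.fft.fft2(img)
    feats = []
    for k in bank:
        resp = np.real(np.fft.ifft2(F * np.fft.fft2(k, s=img.shape)))
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

rng = np.random.default_rng(0)
face = rng.random((32, 32))            # stand-in for a grayscale face crop
bank = gabor_bank()
feats = gabor_features(face, bank)     # 36 filters -> 72-dimensional feature vector
```

In practice the response statistics (or the raw filtered images) feed a classifier; the 36-filter bank is what gives the multi-scale, multi-orientation description the paper relies on.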
- Research Article
2
- 10.3390/a18080473
- Jul 30, 2025
- Algorithms
Facial expression recognition (FER) is a critical research direction in artificial intelligence, widely used in intelligent interaction, medical diagnosis, security monitoring, and other domains. These applications highlight its considerable practical value and social significance. FER models often need to run efficiently on mobile or edge devices, so research on lightweight facial expression recognition is particularly important. However, the feature extraction and classification methods of most current lightweight convolutional neural network expression recognition algorithms are not specifically optimized for the characteristics of facial expression images and fail to make full use of the feature information those images contain. To address the lack of models that are both lightweight and effectively optimized for expression-specific feature extraction, this study proposes a novel network design tailored to the characteristics of facial expressions. Taking the backbone architecture of the MobileNet V2 network as a reference, we design LightExNet, a lightweight convolutional neural network based on the fusion of deep and shallow layers, an attention mechanism, and a joint loss function, according to the characteristics of facial expression features. In the LightExNet architecture, deep and shallow features are first fused in order to fully extract the shallow features of the original image, reduce information loss, alleviate the vanishing-gradient problem as the number of convolutional layers increases, and achieve multi-scale feature fusion. The MobileNet V2 architecture has also been streamlined to seamlessly integrate the deep and shallow networks.
Secondly, drawing on the characteristics of facial expression features, a new channel and spatial attention mechanism is proposed to capture and encode the feature information of different expression regions as fully as possible, which effectively improves recognition accuracy. Finally, an improved center loss function is superimposed to further improve classification accuracy, with corresponding measures taken to significantly reduce the computational cost of the joint loss function. LightExNet is evaluated on three mainstream facial expression datasets: Fer2013, CK+ and RAF-DB. Experimental results show that LightExNet has 3.27 M parameters and 298.27 M FLOPs, with accuracies of 69.17%, 97.37%, and 85.97% on the three datasets, respectively. Its overall performance exceeds that of current mainstream lightweight expression recognition algorithms such as MobileNet V2, IE-DBN, Self-Cure Net, Improved MobileViT, MFN, Ada-CM, and Parallel CNN (Convolutional Neural Network). The results confirm that LightExNet improves recognition accuracy and computational efficiency while reducing energy consumption and enhancing deployment flexibility. These advantages underscore its strong potential for real-world applications in lightweight facial expression recognition.
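The abstract does not detail LightExNet's exact attention design, but channel-plus-spatial attention modules generally follow a CBAM-style pattern: pooled channel descriptors pass through a shared reduction MLP to gate channels, then channel-wise statistics gate pixels. The NumPy sketch below shows that pattern; all weights, shapes, and the fixed map-mixing in the spatial branch are illustrative assumptions, not the paper's actual module:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """x: (C, H, W). Gate each channel using avg- and max-pooled descriptors
    passed through a shared two-layer reduction MLP (CBAM-style)."""
    avg = x.mean(axis=(1, 2))                        # (C,)
    mx = x.max(axis=(1, 2))                          # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)
                   + w2 @ np.maximum(w1 @ mx, 0.0))  # (C,), values in (0, 1)
    return x * gate[:, None, None]

def spatial_attention(x):
    """Gate each pixel from the channel-wise average and max maps. A learned
    convolution would normally mix the two maps; a fixed average keeps the
    sketch dependency-free."""
    gate = sigmoid((x.mean(axis=0) + x.max(axis=0)) / 2.0)  # (H, W)
    return x * gate[None, :, :]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                  # r = channel-reduction ratio (assumed)
x = rng.standard_normal((C, H, W))       # stand-in for a backbone feature map
w1 = 0.1 * rng.standard_normal((C // r, C))
w2 = 0.1 * rng.standard_normal((C, C // r))
y = spatial_attention(channel_attention(x, w1, w2))
```

In a real network the two gates are learned end-to-end and inserted after convolutional blocks, so that expression-relevant regions (eyes, brows, mouth) receive higher weights.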
- Book Chapter
2
- 10.4018/979-8-3693-4143-8.ch007
- May 14, 2024
In the field of computer vision, facial expression recognition is an emerging area that analyses visual face data to understand human emotions, and facial expression detection and recognition have recently become popular research topics. The literature surveyed here is compiled from several credible studies released over the last 10 years. As artificial intelligence has advanced in recent years, experimentation with various methodologies for facial expression recognition has grown, yielding promising results in accurately identifying and recognizing facial emotions from input modalities such as images, text, facial expressions, and physiological signals. However, accurate analysis of basic emotions like anger, happiness, sadness, and fear remains a challenge. This chapter provides valuable insights for researchers interested in advancing facial emotion recognition using machine learning and deep learning techniques.
- Conference Article
4
- 10.1109/icbdie52740.2021.00059
- Apr 1, 2021
Facial expression recognition is a popular field of Computer Vision. Expressions carry a wealth of information about human behavior, conveying people's subtle emotional reactions and corresponding psychological states, and they are used widely in daily human communication. With the rapid development of Artificial Intelligence, the development and innovation of facial expression recognition technology has attracted increasing attention, and research on Deep Learning in this field has become a hot spot. It is therefore worthwhile to analyze the application of Artificial Intelligence technology to facial expression recognition. This paper analyzes facial expression recognition technology based on Artificial Intelligence in the field of education. Firstly, it summarizes public expression datasets and the classification of student expressions; then it introduces the basic process and common methods of expression recognition based on Deep Learning; finally, it introduces typical applications of facial expression recognition in the field of education.
- Conference Article
1
- 10.3390/ecerph-3-09109
- Jan 12, 2021
A new infection, the coronavirus, has been spreading among humans throughout the globe rapidly and intensely. Owing to its fast spread since December 2019, financial activity across the world has deteriorated: worldwide lockdowns caused the world's biggest stock markets to collapse, unemployment increased sharply, and trade between countries stopped. To curb person-to-person transmission, the World Health Organization (WHO) advised people to adopt home isolation. The main challenge in this pandemic is identifying people infected with the virus. The methods commonly used at present, measuring body temperature and blood testing, are complex and intrusive. The current challenge is to develop technology that can non-intrusively detect suspected coronavirus patients in crowded places through COVID-like symptoms such as cough, sneezing and flu. A further challenge for research in this area is the difficulty of obtaining datasets, given the limited number of patients who consent to take part in research studies. Given the efficacy of Artificial Intelligence (AI) in healthcare systems, it is a great challenge for researchers to develop an AI algorithm that can assist health professionals and government officials to automatically identify and segregate people showing coronavirus symptoms such as cough and flu. Hence, this paper proposes a novel proof-of-concept system using ML-DCNNet to identify coronavirus-infected people through facial expression (FE) recognition. The proposed algorithm takes people's facial expressions and identifies those linked with normal health, cough, sneezing and flu.
The facial expression data were collected in marketplaces, medical clinics and quarantine centers in India. The developed algorithm works in two stages: in the first stage, suspected COVID-infected patients are classified by Expression-Net on the basis of FEs, and in the second stage, the intensity level is checked by Intensity-Net to segregate suspected people with cough, sneezing and flu symptoms. The proposed ML-DCNN prototype is used to detect people infected with COVID-19, with symptom-intensity estimation carried out using the COVID-19 datasets. The proposed system would act as a COVID alert system, flagging the presence of suspected coronavirus-infected people with symptoms of cough, sneezing and flu. It is the first study of its kind to analyze facial expressions together with behavioural measures (coughing, sneezing, flu and hand movements). This study is a proof of concept that could become a viable solution for detecting suspected COVID patients in the future; however, it needs to be tested on a larger dataset. It is anticipated that the proposed method will demonstrate distinguished performance compared with the methods currently in use.
- Research Article
- 10.37349/emed.2025.1001370
- Nov 12, 2025
- Exploration of Medicine
Background: Although accurate pain assessment is crucial in clinical care, pain evaluation is traditionally based on self-report or observer-based scales. Artificial intelligence (AI) applied to facial expression recognition is promising for objective, automated, and real-time pain assessment. Methods: The study followed PRISMA guidelines. We searched PubMed/MEDLINE, Scopus, Web of Science, Cochrane Library, and the IEEE Xplore databases for the literature published between 2015 and 2025 on the applications of AI for pain assessment via facial expression analysis. Eligible studies included original articles in English applying different AI techniques. Exclusion criteria were neonatal/pediatric populations, non-facial approaches, reviews, case reports, letters, and editorials. Methodological quality was assessed using the RoB 2 tool (for RCTs) and adapted appraisal criteria for AI development studies. This systematic review was registered in PROSPERO (https://doi.org/10.17605/OSF.IO/N9PZA). Results: A total of 25 studies met the inclusion criteria. Sample sizes ranged from small experimental datasets (n < 30) to larger clinical datasets (n > 500). AI strategies included machine learning models, convolutional neural networks (CNNs), recurrent neural networks such as long short-term memory (LSTM), transformers, and multimodal fusion models. The accuracy in pain detection varied between ~70% and > 90%, with higher performance observed in deep learning and multimodal frameworks. The risk of bias was overall moderate, with frequent concerns related to small datasets and lack of external validation. No meta-analysis was performed due to heterogeneity in datasets, methodologies, and outcome measures. Discussion: AI-based facial expression recognition shows promising accuracy for automated pain assessment, particularly in controlled settings and binary classification tasks. 
However, evidence remains limited by small sample sizes, methodological heterogeneity, and scarce external validation. Large-scale multicenter studies are required to confirm clinical applicability and to strengthen the certainty of evidence for use in diverse patient populations.
- Research Article
- 10.1051/itmconf/20257302036
- Jan 1, 2025
- ITM Web of Conferences
Facial expressions, as a vital conduit for human emotional expression, are among the features most readily observable by machines in the field of computer vision. Consequently, facial expression recognition holds broad potential for applications in artificial intelligence and health monitoring, among others. Given the diversity and complexity of expressions, the development of efficient and accurate models for expression recognition is of significant importance. This paper systematically reviews the foundational knowledge and related research in facial expression recognition, analyzing the application of current primary models to expression recognition. Employing a combination of literature review and experimental analysis, this study evaluates existing facial expression recognition algorithms. Special attention is given to advanced models based on Convolutional Neural Networks (CNNs), with a detailed comparison of their architectures and characteristics and an analysis of their performance under various conditions. The paper concludes with a summary of the latest advancements in the field of facial expression recognition and proposes potential directions for future research.
- Research Article
88
- 10.1177/070674370505000905
- Aug 1, 2005
- The Canadian Journal of Psychiatry
Impaired facial expression recognition in schizophrenia patients contributes to abnormal social functioning and may predict functional outcome in these patients. Facial expression processing involves individual neural networks that have been shown to malfunction in schizophrenia. Whether these patients have a selective deficit in facial expression recognition or a more global impairment in face processing remains controversial. To investigate whether patients with schizophrenia exhibit a selective impairment in facial emotional expression recognition, compared with patients with major depression and healthy control subjects. We studied performance in facial expression recognition and facial sex recognition paradigms, using original morphed faces, in a population with schizophrenia (n=29) and compared their scores with those of depression patients (n=20) and control subjects (n=20). Schizophrenia patients achieved lower scores than both other groups in the expression recognition task, particularly in fear and disgust recognition. Sex recognition was unimpaired. Facial expression recognition is impaired in schizophrenia, whereas sex recognition is preserved, which highly suggests an abnormal processing of changeable facial features in this disease. A dysfunction of the top-down retrograde modulation coming from limbic and paralimbic structures on visual areas is hypothesized.
- Dissertation
- 10.25904/1912/4371
- Oct 20, 2021
Understanding customer experience in real-time can potentially support people’s safety and comfort while in public spaces. Existing techniques, such as surveys and interviews, can only analyse data at specific times. Therefore, organisations that manage public spaces, such as local government or business entities, cannot respond immediately when urgent actions are needed. Manual monitoring through surveillance cameras can enable organisation personnel to observe people. However, fatigue and human distraction during constant observation cannot ensure reliable and timely analysis. Artificial intelligence (AI) can automate people observation and analyse their movement and any related properties in real-time. Analysing people’s facial expressions can provide insight into how comfortable they are in a certain area, while analysing crowd density can inform us of the area’s safety level. By observing the long-term patterns of crowd density, movement, and spatial data, the organisation can also gain insight to develop better strategies for improving people’s safety and comfort. There are three challenges to making an AI-enabled video surveillance system work well in public spaces. First is the readiness of AI models to be deployed in public space settings. Existing AI models are designed to work in generic/particular settings and will suffer performance degradation when deployed in a real-world setting. Therefore, the models require further development to tailor them for the specific environment of the targeted deployment setting. Second is the inclusion of AI continual learning capability to adapt the models to the environment. AI continual learning aims to learn from new data collected from cameras to adapt the models to constant visual changes introduced in the setting. Existing continuous learning approaches require long-term data retention and past data, which then raise data privacy issues. 
Third, most of the existing AI-enabled surveillance systems rely on centralised processing, meaning data are transmitted to a central/cloud machine for video analysis purposes. Such an approach involves data privacy and security risks. Serious data threats, such as data theft, eavesdropping or cyberattack, can potentially occur during data transmission. This study aims to develop an AI-enabled intelligent video surveillance system based on deep learning techniques for public spaces established on responsible AI principles. This study formulates three responsible AI criteria, which become the guidelines to design, develop, and evaluate the system. Based on the criteria, a framework is constructed to scale up the system over time to be readily deployed in a specific real-world environment while respecting people’s privacy. The framework incorporates three AI learning approaches to iteratively refine the AI models within the ethical use of data. First is the AI knowledge transfer approach to adapt existing AI models from generic deployment to specific real-world deployment with limited surveillance datasets. Second is the AI continuous learning approach to continuously adapt AI models to visual changes introduced by the environment without long-period data retention and the need for past data. Third is the AI federated learning approach to limit sensitive and identifiable data transmission by performing computation locally on edge devices rather than transmitting to the central machine. This thesis contributes to the study of responsible AI specifically in the video surveillance context from both technical and non-technical perspectives. It uses three use cases at an international airport as the application context to understand passenger experience in real-time to ensure people’s safety and comfort. A new video surveillance system is developed based on the framework to provide automated people observation in the application context. 
Based on real deployment using the airport's selected cameras, the evaluation demonstrates that the system can provide real-time automated video analysis for the three use cases while respecting people's privacy. Comprehensive experiments show that AI knowledge transfer can effectively address the issue of limited surveillance datasets by transferring knowledge from similar datasets rather than training from scratch, and that it can be further improved by incrementally transferring knowledge from multiple datasets with smaller gaps rather than in a one-stage process. Learning without Forgetting is a viable approach for AI continuous learning in the video surveillance context: it consistently outperforms fine-tuning and joint-training approaches with lower data retention and without the need for past data. AI federated learning can be a feasible solution for continuous learning in video surveillance without compromising model accuracy, obtaining comparable accuracy with quicker training time than joint-training.
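The federated learning idea described above, local computation on edge devices with only model weights sent to a central server, can be illustrated with a minimal FedAvg-style sketch on a linear model. The clients, data, model, and hyperparameters below are illustrative assumptions, not the thesis's actual surveillance setup:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=20):
    """One client's local training: gradient steps on its own data only."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * (2.0 / len(y)) * X.T @ (X @ w - y)  # MSE gradient step
    return w

def fed_avg(global_w, clients):
    """Server round: average returned weights, weighted by local sample count.
    Only weight vectors cross the network; raw data stays on each device."""
    total = sum(len(y) for _, y in clients)
    return sum((len(y) / total) * local_update(global_w, X, y)
               for X, y in clients)

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])           # ground-truth model the clients share
clients = []
for _ in range(3):                       # three edge devices
    X = rng.standard_normal((40, 2))
    clients.append((X, X @ true_w + 0.01 * rng.standard_normal(40)))

w = np.zeros(2)
for _ in range(30):                      # communication rounds
    w = fed_avg(w, clients)              # w converges toward true_w
```

The same server-averaging loop applies unchanged when the client model is a deep network; only `local_update` grows.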
- Research Article
22
- 10.1038/s41598-022-14981-6
- Jun 23, 2022
- Scientific Reports
This work focuses on facial processing, which refers to artificial intelligence (AI) systems that take facial images or videos as input data and perform some AI-driven processing to obtain higher-level information (e.g. a person’s identity, emotions, demographic attributes) or newly generated imagery (e.g. with modified facial attributes). Facial processing tasks, such as face detection, face identification, facial expression recognition or facial attribute manipulation, are generally studied as separate research fields and without considering a particular scenario, context of use or intended purpose. This paper studies the field of facial processing in a holistic manner. It establishes the landscape of key computational tasks, applications and industrial players in the field in order to identify the 60 most relevant applications adopted for real-world uses. These applications are analysed in the context of the new proposal of the European Commission for harmonised rules on AI (the AI Act) and the 7 requirements for Trustworthy AI defined by the European High Level Expert Group on AI. More particularly, we assess the risk level conveyed by each application according to the AI Act and reflect on current research, technical and societal challenges towards trustworthy facial processing systems.
- Abstract
- 10.1093/schbul/sbaa030.315
- May 1, 2020
- Schizophrenia Bulletin
Background: A history of Childhood Trauma (CT), i.e., physical or emotional abuse or neglect, and sexual abuse, is reportedly more prevalent in individuals suffering from psychosis than in the general population. Crucial questions remain about the nature of interpersonal functioning in CT survivors, involving the capacity to understand and interpret other people's thoughts and feelings, especially in individuals with a First Episode of Schizophrenia (FESz). We investigated the Theory of Mind (ToM) performance of patients with FESz related to CT in comparison to healthy controls (HC).
Methods: Participants (n=77) completed the Reading the Mind in the Eyes Test Revised (RMET) and the Childhood Experience of Care and Abuse Questionnaire (CECA-Q). The Word Accentuation Test (TAP) was used to estimate premorbid IQ. Seventeen patients with FESz (mean age = 24.9, SD = 5.4; male = 79.6%; education = 10.7 years, SD = 1.5) were recruited at the First-Episode Psychosis Program, Hospital 12 de Octubre, Madrid, and 60 HC (mean age = 27.6, SD = 7.2; male = 45.6%; education = 14.5 years, SD = 2.8) were healthy volunteers. Between-group comparisons were made using ANCOVA, with group and CT as fixed factors and age, years of education and IQ as covariates.
Results: Preliminary results showed that, compared to controls, patients with FESz performed worse on the recognition and interpretation of facial expressions in both male and female faces (p < .001). Patients with FESz did not perform differently from HC in the recognition and interpretation of positive facial expressions (p = .074); however, they showed poorer interpretation of negative (p < .001) and neutral facial expressions (p < .001) than HC. FESz patients with CT (n = 12) showed higher interpretation of facial expressions, only for female faces (p < .001), compared to patients without CT (n = 7).
HC with CT (n = 28) also showed higher interpretation of facial expressions, only for negative facial expressions (p = .014), compared to HC without CT (n = 32). Female patients with FESz performed worse than female HC on the recognition and interpretation of negative (p = .024) and neutral faces (p < .001), only for male faces (p = .038). Male patients with FESz performed worse than male HC on the recognition and interpretation of positive (p = .038) and negative facial expressions (p = .001) of male faces (p < .001). Compared with male FESz patients without CT, male FESz patients with CT showed higher interpretation of female faces (p = .030). Compared with male HC without CT, male HC with CT showed higher interpretation of male faces (p = .031).
Discussion: In line with previous research, our preliminary findings indicated theory of mind deficits in patients with FESz. Interestingly, in our study alterations in the interpretation and recognition of facial expressions were found only for negative and neutral, but not positive, facial expressions. Furthermore, and contrary to the literature, we found greater interpretation and recognition of facial expressions in patients and healthy controls who were survivors of CT; this was observed specifically for female faces in patients and for negative facial expressions in healthy controls. In addition, female and male patients and healthy controls seem to interpret facial expressions differently in relation to childhood trauma. Nevertheless, increasing our sample size would give us the opportunity to draw further conclusions about how adverse experiences during childhood may influence social abilities in patients with FESz.
- Research Article
53
- 10.1002/ejp.1948
- Apr 6, 2022
- European Journal of Pain
Pain intensity evaluation by self-report is difficult and biased in non-communicating people, which may contribute to inappropriate pain management. The use of artificial intelligence (AI) to evaluate pain intensity based on automated facial expression analysis had not previously been evaluated in clinical conditions. We trained and externally validated a deep-learning system (a ResNet-18 convolutional neural network) to identify and classify 2810 facial expressions of 1189 patients, captured before and after surgery, according to their self-reported pain intensity on a numeric rating scale (NRS, 0-10). AI performance was evaluated by accuracy (concordance between the AI prediction and patient-reported pain intensity) and by sensitivity and specificity for diagnosing pain ≥4/10 and ≥7/10. We then compared the AI's performance with that of 33 nurses evaluating pain intensity from facial expressions in the same situation. In the external testing set (120 face images), the deep-learning system predicted the exact pain intensity among the 11 possible scores (0-10) in 53% of cases, with a mean error of 2.4 points. Its sensitivities for detecting pain ≥4/10 and ≥7/10 were 89.7% and 77.5%, respectively. Nurses estimated the correct NRS pain intensity with a mean accuracy of 14.9% and identified pain ≥4/10 and ≥7/10 with sensitivities of 44.9% and 17.0%. Subject to further improvement of AI performance through additional training, these results suggest that AI using facial expression analysis could assist physicians in evaluating pain and detecting severe pain, especially in people unable to report their pain appropriately themselves. These original findings represent a major step in the development of a fully automated, rapid, standardized and objective method based on facial expression analysis to measure pain and detect severe pain.
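The threshold metrics reported above (sensitivity and specificity for pain ≥4/10 and ≥7/10) are computed by binarizing both the predicted and the self-reported NRS scores at the threshold. The sketch below shows that computation; the score lists are made-up illustrations, not the study's data:

```python
import numpy as np

def sens_spec(pred, truth, threshold):
    """Sensitivity and specificity of predicted NRS scores for pain >= threshold."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    pos, neg = truth >= threshold, truth < threshold        # true pain status
    sensitivity = np.sum((pred >= threshold) & pos) / pos.sum()
    specificity = np.sum((pred < threshold) & neg) / neg.sum()
    return sensitivity, specificity

# Illustrative 0-10 NRS scores (not the study's data).
truth = [0, 2, 3, 4, 5, 6, 7, 8, 9, 1]   # self-reported intensity
pred  = [1, 2, 4, 4, 4, 5, 7, 7, 8, 0]   # model predictions

exact = np.mean(np.asarray(pred) == np.asarray(truth))  # exact-score accuracy
sens4, spec4 = sens_spec(pred, truth, 4)
sens7, spec7 = sens_spec(pred, truth, 7)
```

Note how a model can have modest exact-score accuracy yet high sensitivity at a clinical threshold: here only 3 of 10 scores match exactly, but every truly painful case (≥4/10) is flagged, which mirrors the gap between the 53% exact accuracy and the 89.7% sensitivity reported above.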
- Research Article
4
- 10.1515/revneuro-2024-0125
- Jan 21, 2025
- Reviews in the neurosciences
The recognition and classification of facial expressions using artificial intelligence (AI) presents a promising avenue for early detection and monitoring of neurodegenerative disorders. This narrative review critically examines the current state of AI-driven facial expression analysis in the context of neurodegenerative diseases, such as Alzheimer's and Parkinson's. We discuss the potential of AI techniques, including deep learning and computer vision, to accurately interpret and categorize subtle changes in facial expressions associated with these pathological conditions. Furthermore, we explore the role of facial expression recognition as a noninvasive, cost-effective tool for screening, disease progression tracking, and personalized intervention in neurodegenerative disorders. The review also addresses the challenges, ethical considerations, and future prospects of integrating AI-based facial expression analysis into clinical practice for early intervention and improved quality of life for individuals at risk of or affected by neurodegenerative diseases.