Facial Expressions Research Articles

Overview

30,547 articles published in the last 50 years

Related Topics

  • Changes In Facial Expression
  • Facial Expression Recognition
  • Facial Expression Classification
  • Facial Expression Analysis
  • Expression Recognition
  • Neutral Expressions
  • Facial Movements

Articles published on Facial Expressions

29,145 search results, sorted by recency

Adaptive Firefly Optimization Based Feature Selection and Ensemble Machine Learning Algorithm for Facial Expression Emotion Recognition

Facial expression emotion recognition (FEER) determines a person's emotional state from their facial expressions; it is a rich source of emotional information and one of the most important channels of interpersonal communication. Despite being a skill humans exercise naturally, finding computational methods that replicate facial emotion recognition remains an open problem. To address it, this work proposes an Adaptive Firefly Optimization (AFO) and Ensemble Machine Learning (EML) algorithm for FEER. Data are drawn from the CK+ and KMU-FED databases, and occlusions around the mouth and eyes are replicated during occlusion generation. When computing optical flow, the method preserves as much information as possible in the normalized inputs that deep networks require for recognition and reconstruction; reconstruction is performed with Deep Q-learning (DQL), which drives occlusion-based semantic segmentation (SS). The AFO algorithm handles feature selection, choosing the most pertinent, non-redundant features from the given database and generating the best fitness values (FV) under an objective function (OF) for higher recognition accuracy (ACC). EML algorithms, including K-Nearest Neighbour (KNN), Random Forest (RF), and an Enhanced Artificial Neural Network (EANN), perform the final FEER classification and converge faster during training and testing. According to the results, the proposed AFO-EML method outperforms existing techniques in ACC, precision (P), recall (R), and F-measure.

  • Journal: Journal of Machine and Computing
  • Published: Jul 5, 2025
  • Authors: Sudha S S + 1
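
The ensemble stage described above is conventional enough to sketch. Below is a minimal soft-voting ensemble over KNN and Random Forest in scikit-learn, with an MLPClassifier standing in for the paper's Enhanced ANN; the AFO feature selector, the DQL-based reconstruction, and the actual EANN are not described in enough detail to reproduce, so this illustrates only the voting stage, not the authors' implementation.

```python
# Hypothetical sketch of the EML voting stage. MLPClassifier is a
# stand-in for the Enhanced ANN, whose architecture is not given here.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

def build_feer_ensemble() -> VotingClassifier:
    return VotingClassifier(
        estimators=[
            ("knn", KNeighborsClassifier(n_neighbors=5)),
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("ann", MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)),
        ],
        voting="soft",  # average predicted class probabilities across models
    )

# Usage with AFO-selected features X and emotion labels y (placeholders):
# model = build_feer_ensemble().fit(X_train, y_train)
# predictions = model.predict(X_test)
```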

Non-contact map interaction configuration optimization

Current map interaction operations primarily rely on the use of a mouse, keyboard, or touchscreen, which are not convenient in special scenarios such as driving or cycling, or for specific populations such as individuals with hand disabilities or ALS patients. Therefore, this paper explores non-contact map interaction methods that do not require hand involvement. We conducted a comparative analysis of user preferences for several common non-contact methods (eye tracking control, voice control, head movement control, and facial expression control) in basic map operations such as zoom-in, zoom-out, panning, map toggle, location search, and route planning. Based on expert prior knowledge, we pre-evaluated the various non-contact interaction methods and eliminated those with poor evaluations, retaining the higher-scoring eye tracking and voice control methods. Subsequently, we conducted user research through practical feedback and derived the optimal configuration for non-contact map interaction from statistical analysis of the questionnaire results. Finally, we evaluated the map visualization interaction system under the optimal configuration using the System Usability Scale (SUS), and the results showed high usability and an excellent user interaction experience. This research has reference value for the development of non-contact map visualization systems.

  • Journal: Cartography and Geographic Information Science
  • Published: Jul 4, 2025
  • Authors: Nai Yang + 4
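
The SUS evaluation mentioned above follows a fixed scoring rule: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the summed item scores are multiplied by 2.5 to give a 0-100 score. A minimal scorer, independent of the paper's data:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (score = response - 1);
    even-numbered items are negatively worded (score = 5 - response).
    The summed item scores are scaled by 2.5 onto a 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))  # index 0 is item 1 (odd)
    return total * 2.5

# Example: a fairly positive questionnaire scores 85.0
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))
```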

Multimodal Feature-Guided Audio-Driven Emotional Talking Face Generation

Audio-driven emotional talking face generation aims to generate talking face videos with rich facial expressions and temporal coherence. Current diffusion model-based approaches predominantly depend on either single-label emotion annotations or external video references, which often struggle to capture the complex relationships between modalities, resulting in less natural emotional expressions. To address these issues, we propose MF-ETalk, a multimodal feature-guided method for emotional talking face generation. Specifically, we design an emotion-aware multimodal feature disentanglement and fusion framework that leverages Action Units (AUs) to disentangle facial expressions and models the nonlinear relationships among AU features using a residual encoder. Furthermore, we introduce a hierarchical multimodal feature fusion module that enables dynamic interactions among audio, visual cues, AUs, and motion dynamics. This module is optimized through global motion modeling, lip synchronization, and expression subspace learning, enabling full-face dynamic generation. Finally, an emotion-consistency constraint module is employed to refine the generated results and ensure the naturalness of expressions. Extensive experiments on the MEAD and HDTF datasets demonstrate that MF-ETalk outperforms state-of-the-art methods in both expression naturalness and lip-sync accuracy. For example, it achieves an FID of 43.052 and E-FID of 2.403 on MEAD, along with strong synchronization performance (LSE-C of 6.781, LSE-D of 7.962), confirming the effectiveness of our approach in producing realistic and emotionally expressive talking face videos.

  • Journal: Electronics
  • Published: Jul 2, 2025
  • Authors: Xueping Wang + 5
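
The residual encoder over Action Unit features can be illustrated generically. The PyTorch module below is a hypothetical sketch in the spirit of the description; MF-ETalk's actual AU count, layer sizes, and fusion wiring are not specified in the abstract.

```python
import torch
import torch.nn as nn

class ResidualAUEncoder(nn.Module):
    """Hypothetical residual encoder for Action Unit (AU) intensities.

    Each block adds a learned nonlinear correction to an identity path,
    loosely matching the abstract's residual encoder for modeling
    nonlinear relationships among AU features.
    """
    def __init__(self, num_aus: int = 17, hidden: int = 128, blocks: int = 3):
        super().__init__()
        self.proj = nn.Linear(num_aus, hidden)
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden))
            for _ in range(blocks)
        )

    def forward(self, au: torch.Tensor) -> torch.Tensor:
        h = self.proj(au)
        for block in self.blocks:
            h = h + block(h)  # residual connection
        return h

# au_batch = torch.rand(8, 17)           # 17 AU intensities per face
# emb = ResidualAUEncoder()(au_batch)    # (8, 128) expression embedding
```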

Novel Loss of Function Variant in SOST From Chinese Family Results in Sclerosteosis 1

Background: SOST encodes a secreted glycoprotein similar in sequence to the differential screening-selected gene aberrative in neuroblastoma (DAN) family of bone morphogenetic protein (BMP) antagonists. Pathogenic variants in the SOST gene result in sclerosteosis, van Buchem disease (VBD), or craniodiaphyseal dysplasia. SOST-related genetic disorders are very rare, and limited studies have reported variants associated with sclerosteosis. Methods: Clinical tests, including magnetic resonance imaging (MRI), computed tomography (CT), emission computed tomography (ECT), electromyogram (EMG), routine blood tests, and physical examinations, were conducted for the proband. Trio whole-exome sequencing (Trio-WES) was performed, and rare variants (allele frequency < 0.01) in exonic and splicing regions were selected for further pathogenicity evaluation. Candidate pathogenic variants were validated by Sanger sequencing. The wild-type and mutant SOST sequences were cloned into the pcDNA3.1 expression vector, and RNA and protein expression levels were investigated in the HEK293T cell line. Results: We present a case study of a proband who displays abnormal facial expressions accompanied by numbness. Brain MRI shows thickening of the skull and disappearance of the diploë signal. A temporal bone CT scan indicates diffuse osteosclerosis affecting the bilateral ossicular chains and internal auditory meatus, as well as stenosis of the bilateral internal auditory meatus. Trio-WES detected a novel homozygous variant in the proband, NM_025237.3(SOST): c.327C>A (p.Cys109*), which was also validated in his sister from the same family. According to the ACMG guidelines, the variant is classified as "likely pathogenic." In vitro experiments demonstrated that the variant decreased SOST expression at the RNA and protein levels and produced a truncated protein. Conclusion: This report provides new evidence for the clinical diagnosis of SOST-related facial numbness and expands the variant spectrum of SOST.

  • Journal: Molecular Genetics & Genomic Medicine
  • Published: Jul 2, 2025
  • Authors: Yufan Guo + 10

The influence of facial expression absence on the recognition of different emotions: Evidence from behavioral and event-related potentials studies.


  • Journal: Biological Psychology
  • Published: Jul 1, 2025
  • Authors: Juan Song + 3

Day-to-day dynamics of facial emotion expressions in posttraumatic stress disorder.


  • Journal: Journal of Affective Disorders
  • Published: Jul 1, 2025
  • Authors: Whitney R Ringwald + 6

Multimodal depression recognition and analysis: Facial expression and body posture changes via emotional stimuli.


  • Journal: Journal of Affective Disorders
  • Published: Jul 1, 2025
  • Authors: Yang Liu + 8

“Welcoming with a smile”: how anchors’ facial expressions drive consumer purchasing behavior

Purpose: The relationship between anchors and consumers in livestreaming environments is a significant area of research, with most prior studies focusing on anchor-specific characteristics. Based on emotional contagion and parasocial interaction theories, this study examines the relationship between anchor facial expressions and consumer purchasing behavior. Our research aims to identify a more widely applicable phenomenon and explain the mechanisms behind this relationship, contributing to theoretical development and offering practical insights for livestreaming platforms and anchors. Design/methodology/approach: Data were gathered from the Douyin livestreaming platform at varying traffic levels, with anchor facial expressions identified through an object detection model, resulting in a dataset of 22,406 entries. Ordinary least squares regression analysis and multiple robustness checks were performed to evaluate the proposed hypotheses. Findings: The results indicate that anchors' positive facial expressions significantly enhance consumer purchasing behavior, mediated by parasocial interaction and relationships. Notably, anchor influence negatively moderates this effect: the impact of facial expressions on consumer behavior is weaker for highly influential anchors and stronger for less influential ones. Originality/value: This study optimizes and categorizes facial expression classifications adapted to livestreaming contexts, examines how anchors' positive emotional facial expressions promote consumer purchasing behavior, and explores the underlying mechanisms. This expands the research perspective on the role of visual cues in livestreaming and investigates the application of object detection models in this field, offering valuable insights from both theoretical and practical perspectives.

  • Journal: Aslib Journal of Information Management
  • Published: Jul 1, 2025
  • Authors: Xiaoping Sheng + 4
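
The regression design, purchasing behavior regressed on expression positivity with anchor influence as a moderator, can be sketched with statsmodels on synthetic data. All variable names and coefficients below are hypothetical, not the paper's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the 22,406-entry livestream dataset.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "pos_expr": rng.uniform(0, 1, n),   # positive facial expression score
    "influence": rng.uniform(0, 1, n),  # anchor influence (scaled)
    "viewers": rng.normal(0, 1, n),     # an example control variable
})
# Simulate the reported pattern: positive main effect, negative interaction.
df["purchases"] = (0.8 * df.pos_expr - 0.5 * df.pos_expr * df.influence
                   + 0.2 * df.viewers + rng.normal(0, 0.1, n))

# OLS with an interaction term; a negative pos_expr:influence coefficient
# mirrors the moderation result (weaker effect for influential anchors).
model = smf.ols("purchases ~ pos_expr * influence + viewers", data=df)
print(model.fit(cov_type="HC1").summary())  # robust SEs as a sanity check
```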

Efficacy evaluation and facial expressions biomarker of light therapy in youths with subthreshold depression: A randomized control trial study.


  • Journal: Journal of Affective Disorders
  • Published: Jul 1, 2025
  • Authors: Xin Chen + 23

Aging alters face expressions processing and recognition: insights on possible neural mechanisms.

The present work investigated how aging influences the different stages of face expression processing: fixation patterns, early perception, face motor response, and recognition. Thirty-four participants (17 young, 17 senior) underwent i) recording of fixation patterns, ii) recording of the P100 and N170 components of event-related potentials, iii) measurement of short intracortical inhibition (SICI) and intracortical facilitation (ICF) of the face primary motor cortex (face M1), and iv) a recognition task during passive viewing of neutral, happy, and sad face expressions. Senior subjects mostly looked at the mouth, had reduced pupil size and delayed N170 latency regardless of expression compared to young ones, and showed a reduced P100 amplitude when viewing sad faces. Senior subjects' face M1 excitability was enhanced compared to the young group; both groups had reduced SICI when viewing happy faces, but only senior subjects exhibited reduced SICI for sad faces. Young subjects had better recognition accuracy and response times than senior ones, particularly for sad expressions. When viewing sad expressions, SICI was negatively correlated with pupil size and recognition accuracy, and positively correlated with N170 latency. The data suggest that aging reduces visual attention to sad faces, which appears to be connected to increased excitability of face M1, which in turn is linked to impaired recognition skills, especially when processing negative face expressions. These findings provide new insights into how aging affects cognitive functions and the processing of face expressions.

  • Journal: Journal of Neurophysiology
  • Published: Jul 1, 2025
  • Authors: Francesca Ginatempo + 6

Dog facial landmarks detection and its applications for facial analysis

Automated analysis of facial expressions is a crucial challenge in the emerging field of animal affective computing. One of the most promising approaches in this context is facial landmarks, which are well-studied for humans and are now being adopted for many non-human species. The scarcity of high-quality, comprehensive datasets is a significant challenge in the field. This paper is the first to present a novel Dog Facial Landmarks in the Wild (DogFLW) dataset containing 3732 images of dogs annotated with facial landmarks and bounding boxes. Our facial landmark scheme has 46 landmarks grounded in canine facial anatomy, the Dog Facial Action Coding System (DogFACS), and informed by existing cross-species landmarking methods. We additionally provide a benchmark for dog facial landmarks detection and demonstrate two case studies for landmark detection models trained on the DogFLW. The first is a pipeline using landmarks for emotion classification from dog facial expressions from video, and the second is the recognition of DogFACS facial action units (variables), which can enhance the DogFACS coding process by reducing the time needed for manual annotation. The DogFLW dataset aims to advance the field of animal affective computing by facilitating the development of more accurate, interpretable, and scalable tools for analysing facial expressions in dogs with broader potential applications in behavioural science, veterinary practice, and animal-human interaction research.

  • Journal: Scientific Reports
  • Published: Jul 1, 2025
  • Authors: George Martvel + 5
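
Landmark-based pipelines like the two case studies typically normalize raw coordinates before classification. The sketch below uses the 46-point count from the abstract, but the normalization itself is a common convention and not necessarily what the DogFLW pipeline does.

```python
import numpy as np

def normalize_landmarks(pts: np.ndarray) -> np.ndarray:
    """Make a (46, 2) landmark array translation- and scale-invariant.

    Centering on the centroid and dividing by the RMS distance is a
    standard preprocessing step before feeding landmarks to a classifier.
    """
    assert pts.shape == (46, 2)  # 46 landmarks in the DogFLW scheme
    centered = pts - pts.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / scale

# Flattened feature vector for a downstream emotion / DogFACS classifier:
# features = normalize_landmarks(landmarks).ravel()   # shape (92,)
```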

A Review of Methods for Unobtrusive Measurement of Work-Related Well-Being

Work-related well-being is an important research topic, as it is linked to various aspects of individuals' lives, including job performance. To measure it effectively, unobtrusive sensors are desirable to minimize the burden on employees. Because the psychological literature lacks consensus on the dimensions of well-being, our work begins by proposing a conceptualization of well-being based on the refined definition of health provided by the World Health Organization. We then review the existing literature on the unobtrusive measurement of well-being, covering affect, engagement, fatigue, stress, sleep deprivation, physical comfort, and social interactions. Our initial search returned 644 studies, of which we reviewed 35, revealing behavioral markers such as facial expressions, posture, eye movements, and speech. The most commonly used sensing devices were red, green, and blue (RGB) cameras, followed by microphones and smartphones; the most commonly captured markers were body movement, facial expressions, and posture. Our work serves as an investigation into unobtrusive measurement methods applicable to the workplace context, aiming to foster a more employee-centric approach to measuring well-being and to emphasize its affective component.

  • Journal: Machine Learning and Knowledge Extraction
  • Published: Jul 1, 2025
  • Authors: Zoja Anžur + 10

Behaviors, strategies, and interactions in cross training (CrossFit®): A qualitative analysis of competitive dynamics

Cross Training (CrossFit®) has gained popularity as a training modality that combines various sports disciplines, promoting physical development and social interaction. This qualitative study of mixed design (ethnographic and phenomenological) analyzed athletes' behaviors, strategies, and interactions in a CrossFit competition. Seventeen athletes (10 men and 7 women) aged 19 to 28 participated. Data were collected through non-participant observation, focusing on behaviors, facial expressions, and body language at three moments: before, during, and after the competition. In addition, five athletes took part in semi-structured interviews. Inductive thematic analysis allowed the identification of patterns and emerging categories. The results revealed that athletes employ preparation and recovery strategies, such as hydration and stretching, and experience significant emotional reactions. In the pre-competition phase, nervousness and anxiety were mitigated by social support and camaraderie. During the competition, athletes who adopted collaborative approaches showed superior performance, highlighting the importance of teamwork. In the post-competition phase, social interactions and the festive atmosphere fostered reflection and emotional recovery. These findings highlight the importance of physical and mental preparation and social support in sports performance. The research provides a comprehensive view of competitive dynamics in CrossFit, with practical applications for coaches and event organizers. It is suggested that training programs be structured to include effective recovery practices and promote a supportive environment, contributing to the integral development of athletes.

  • Journal: Sportis. Scientific Journal of School Sport, Physical Education and Psychomotricity
  • Published: Jul 1, 2025
  • Authors: Luis A Cardozo + 4

Non-contact detection of mental fatigue from facial expressions and heart signals: A self-supervised-based multimodal fusion method


  • Journal: Biomedical Signal Processing and Control
  • Published: Jul 1, 2025
  • Authors: Shengjian Hu + 3

Automatic pain classification in older patients with hip fracture based on multimodal information fusion

Given the limitations of unimodal pain recognition approaches, this study aimed to develop a multimodal pain recognition system for older patients with hip fractures using multimodal information fusion. The proposed system employs ResNet-50 for facial expression analysis and a VGG-based (VGGish) network for audio-based pain recognition. A channel attention mechanism was incorporated to refine feature representations and enhance the model's ability to distinguish between different pain levels. The outputs of the two unimodal systems were then integrated using a weighted-sum fusion strategy to create a unified multimodal pain recognition model. A self-constructed multimodal pain dataset was used for model training and validation, with the data split in an 80:20 ratio. The VGGish model, optimized with an LSTM network and the channel attention mechanism, was trained on the hip fracture pain dataset, and its accuracy remained at 80% after 500 iterations. The model was subsequently tested on Pain grades 2 to 4 of the BioVid Heat Pain Database; the confusion matrix indicated an accuracy of 85% for Pain grade 4. This study presents the first clinically validated multimodal pain recognition system that integrates facial expression and speech data. The results demonstrate the feasibility and effectiveness of the proposed approach in real-world clinical environments.

  • Journal: Scientific Reports
  • Published: Jul 1, 2025
  • Authors: Shuang Yang + 8
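
The weighted-sum fusion step itself is simple to illustrate: each unimodal branch emits class probabilities, and the multimodal score is their weighted combination. The 0.6/0.4 weighting below is hypothetical; the abstract does not report the values used.

```python
import numpy as np

def weighted_sum_fusion(p_face: np.ndarray, p_audio: np.ndarray,
                        w_face: float = 0.6) -> np.ndarray:
    """Late fusion of per-class probabilities from two unimodal models.

    p_face / p_audio: softmax outputs of the ResNet-50 facial branch and
    the VGGish audio branch. The weight w_face is an assumed value.
    """
    fused = w_face * p_face + (1.0 - w_face) * p_audio
    return fused / fused.sum()  # renormalize to a probability vector

p_face = np.array([0.1, 0.2, 0.6, 0.1])   # facial-expression branch output
p_audio = np.array([0.2, 0.1, 0.5, 0.2])  # audio branch output
print(weighted_sum_fusion(p_face, p_audio).argmax())  # fused class index
```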

Improving EEG based brain computer interface emotion detection with EKO ALSTM model

A brain-computer interface (BCI) is a computer-based communication device that decodes signals from central nervous system (CNS) activity, enabling command, communication, and action without using neuromuscular or muscle channels. Various techniques for automatic emotion identification based on body language, speech, or facial expressions are in use today, but they rely on monitoring exterior expressions of emotion, which are easily manipulated, limiting their applicability. EEG-based emotion detection research might therefore yield significant benefits for enhancing BCI application performance and user experience. To overcome these issues, this study proposes a novel EKO-ALSTM for emotion detection in EEG-based brain-computer interfaces. The study uses EEG signals that record the electrical activity of the brain associated with various emotional states, acquired in real time for emotion detection. The data were pre-processed with a bandpass filter to remove unwanted frequency noise, and feature extraction was then performed on the pre-processed data using the discrete wavelet transform (DWT). The proposed approach is implemented in Python. The proposed system and existing algorithms are compared using a variety of evaluation criteria, including specificity, F1 score, accuracy, recall (sensitivity), and precision (positive predictive value). The results demonstrated that the proposed method achieved better performance in EEG-based BCI emotion detection, with an accuracy of 97.93%, a positive predictive value of 96.24%, a sensitivity of 97.81%, and a specificity of 97.75%. This study emphasizes that innovative approaches have significantly increased the accuracy of emotion identification when applied to EEG-based emotion recognition systems. The findings also suggest that integrating advanced machine learning techniques can further enhance the effectiveness and reliability of these systems in real-world applications, paving the way for more responsive and intuitive BCI technologies.

  • Journal: Scientific Reports
  • Published: Jul 1, 2025
  • Authors: R Kishore Kanna + 8
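
The preprocessing pipeline (bandpass filtering followed by DWT feature extraction) is standard and can be sketched with SciPy and PyWavelets. The sampling rate, cutoffs, wavelet, and decomposition level below are assumptions; the abstract does not state them.

```python
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

FS = 128  # sampling rate in Hz (assumed; not stated in the abstract)

def bandpass(sig: np.ndarray, lo: float = 0.5, hi: float = 45.0) -> np.ndarray:
    """Zero-phase Butterworth bandpass to strip drift and line noise."""
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    return filtfilt(b, a, sig)

def dwt_features(sig: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """Simple DWT feature vector: the energy of each sub-band's coefficients."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

eeg = np.random.randn(FS * 10)          # 10 s of synthetic single-channel EEG
features = dwt_features(bandpass(eeg))  # input to the emotion classifier
print(features.shape)                   # (level + 1,) sub-band energies
```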

Experimental Modeling of Face Emotion Recognition Using Machine Learning Classification (SVM, KNN, Random Forest) and Deep Learning CNN

Facial Emotion Recognition (FER) is a technology that analyzes facial expressions to detect emotions, playing a growing role in psychology and Human-Computer Interaction. In Indonesia, mental health issues are rising, with emotional disorders increasing from 6.0% in 2013 to 9.8% in 2018. Over 19 million people aged 15+ were affected in 2018, a number likely worsened by the COVID-19 pandemic. Given the urgency of early detection, FER offers a non-invasive method to help identify mental health issues. It can support timely intervention and promote psychological well-being, especially in under-resourced settings. This study compares several Machine Learning (ML) and Deep Learning (DL) models—SVM, K-Nearest Neighbor, Random Forest, and Convolutional Neural Networks (CNN)—to classify facial emotions. The dataset used is the Facial Expression Recognition dataset by Jonathan Oheix from Kaggle. Images were preprocessed and used to train and evaluate each model. Traditional ML models relied on extracted features, while CNN learned features directly from images. Results show that CNN achieved the highest accuracy among the tested models. This suggests that FER, especially with CNN, can be a useful tool for early detection of emotional disorders in mental health contexts.

  • Journal: Teknika
  • Published: Jul 1, 2025
  • Authors: Shane Ardyanto Baskara + 1
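
The classical-ML side of such a comparison is straightforward to reproduce with scikit-learn. The sketch below substitutes synthetic features for the preprocessed Kaggle images, whose feature-extraction details are not given in the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in features; replace with real preprocessed face features.
X, y = make_classification(n_samples=700, n_features=64, n_informative=32,
                           n_classes=7, random_state=0)  # 7 basic emotions

models = {
    "SVM": SVC(kernel="rbf", C=1.0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold CV accuracy
    print(f"{name}: {acc:.3f}")
```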

Multiscale wavelet attention convolutional network for facial expression recognition

Deep learning techniques, particularly Convolutional Neural Networks (CNNs), are widely recognized as effective tools for facial expression recognition, but their accuracy still requires further enhancement. The main contributions of this study are as follows. First, the first convolutional layer of the CNN is replaced with a Multi-scale Convolutional (MsC) layer, yielding the Multi-scale CNN (MCNN); experimental results indicate that MCNN improves average accuracy over the CNN by 1.339%. Second, a wavelet Channel Attention (wCA) mechanism is incorporated after the first pooling layer of the CNN, yielding the wCA-based CNN (wCA-CNN), which improves average accuracy over the CNN by 1.414%. Third, combining both changes, replacing the first convolutional layer with the MsC layer and adding the wCA mechanism after the first pooling layer, yields the wCA-based Multi-scale CNN (wCA-MCNN), which improves average accuracy over the CNN by 2.921%. Fourth, the Residual Network (ResNet18) is selected as a baseline model and improved in the same way: compared to ResNet18, the proposed MsC-ResNet18, wCA-ResNet18, and MsC-wCA-ResNet18 improve accuracy by 0.845%, 0.835%, and 1.810%, respectively. Finally, all the proposed methods are evaluated on two datasets: the Facial Expression of Students in Real-Class (FESR) dataset collected from our real classroom and the Karolinska Directed Emotional Faces (KDEF) dataset.

  • Journal: Scientific Reports
  • Published: Jul 1, 2025
  • Authors: Jing-Wei Liu + 4
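
The Multi-scale Convolutional (MsC) idea, parallel convolutions with different kernel sizes whose feature maps are concatenated, can be sketched in PyTorch. Kernel sizes and channel counts below are assumptions; the paper's exact configuration is not given in the abstract.

```python
import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    """Sketch of a multi-scale convolutional (MsC) layer.

    Runs 3x3, 5x5, and 7x7 convolutions in parallel (sizes assumed) and
    concatenates their feature maps, so the first layer sees the face at
    several receptive-field sizes at once.
    """
    def __init__(self, in_ch: int = 1, out_ch: int = 48):
        super().__init__()
        branch_ch = out_ch // 3
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2) for k in (3, 5, 7)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([b(x) for b in self.branches], dim=1)

# x = torch.rand(8, 1, 48, 48)   # batch of grayscale face crops
# y = MultiScaleConv()(x)        # (8, 48, 48, 48) multi-scale feature maps
```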

Robustness of facial expression recognition systems in the presence of mild and uniform blur


  • Journal: Multimedia Tools and Applications
  • Published: Jul 1, 2025
  • Authors: Naveen Kumar H N + 4

Application of deep learning-based facial pain recognition model for postoperative pain assessment.


  • Journal: Journal of Clinical Anesthesia
  • Published: Jul 1, 2025
  • Authors: Ji-Tuo Zhang + 4

