Articles published on emotion-recognition (15,368 search results, sorted by recency)
- New
- Research Article
- 10.3390/electronics14244833
- Dec 8, 2025
- Electronics
- Changliang Zheng + 3 more
Electroencephalogram (EEG)-based emotion recognition has emerged as a key enabler for semantic communication systems in next-generation networks (5G-Advanced/6G), where the goal is to transmit task-relevant semantic information rather than raw signals. However, domain adaptation approaches for EEG emotion recognition typically assume closed-set label spaces and fail when unseen emotional classes arise, leading to negative transfer and degraded semantic fidelity. To address this challenge, we propose a Coarse-to-Fine Open-set Domain Adaptation (C2FDA) framework, which aligns with the semantic communication paradigm by extracting and transmitting only the emotion-related semantics necessary for task performance. C2FDA integrates a cognition-inspired spatio-temporal graph encoder with a coarse-to-fine sample separation pipeline and instance-weighted adversarial alignment. The framework distinguishes between known and unknown emotional states in the target domain, ensuring that only semantically relevant information is communicated, while novel states are flagged as unknown. Experiments on SEED, SEED-IV, and SEED-V datasets demonstrate that C2FDA achieves superior open-set adaptation performance, with average accuracies of 41.5% (SEED → SEED-IV), 42.6% (SEED → SEED-V), and 48.9% (SEED-IV → SEED-V), significantly outperforming state-of-the-art baselines. These results confirm that C2FDA provides a semantic communication-driven solution for robust EEG-based emotion recognition in 6G-oriented human–machine interaction scenarios.
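The abstract names instance-weighted adversarial alignment as one ingredient of C2FDA. As a rough illustration only, such alignment is commonly implemented with a gradient reversal layer whose per-sample loss is scaled by a known-class probability. The sketch below assumes SEED-style 310-dimensional features (62 channels x 5 bands); every layer size, the weighting rule, and names such as `weighted_adversarial_loss` are illustrative guesses, not the authors' implementation.

```python
# Minimal sketch: instance-weighted adversarial domain alignment with a
# gradient reversal layer (GRL). All sizes/names are illustrative assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients so the encoder learns
        # domain-invariant features against the discriminator.
        return -ctx.lambd * grad_output, None

encoder = nn.Sequential(nn.Linear(310, 128), nn.ReLU())  # 310 = 62 ch x 5 bands (assumed)
domain_disc = nn.Sequential(nn.Linear(128, 1))           # source vs. target

def weighted_adversarial_loss(feats, domain_labels, known_probs, lambd=1.0):
    """Down-weight target samples likely to be 'unknown' emotions so they
    do not drag known-class features into misalignment (negative transfer)."""
    logits = domain_disc(GradReverse.apply(feats, lambd)).squeeze(1)
    per_sample = nn.functional.binary_cross_entropy_with_logits(
        logits, domain_labels, reduction="none")
    return (known_probs * per_sample).mean()

# Toy usage: 8 source + 8 target samples.
x = torch.randn(16, 310)
d = torch.cat([torch.zeros(8), torch.ones(8)])  # 0 = source, 1 = target
w = torch.cat([torch.ones(8), torch.rand(8)])   # source samples keep weight 1
loss = weighted_adversarial_loss(encoder(x), d, w)
loss.backward()
```

The design point this illustrates: target samples suspected to be unknown emotions receive small weights, so the discriminator does not force them onto known-class source features, which is exactly the negative-transfer failure mode the abstract describes.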
- New
- Research Article
- 10.1177/08919887251407123
- Dec 7, 2025
- Journal of Geriatric Psychiatry and Neurology
- Roberto Fernández-Fernández + 5 more
Objectives: Social Cognition (SC) can be impaired in Parkinson’s Disease (PD), yet its longitudinal evolution relative to cognitive status is unclear. This study examined whether SC deficits in PD patients change differently depending on baseline cognitive status and cognitive progression. Methods: In this observational study, 48 non‐demented PD patients (32 with normal cognition [PD‐CN], 16 with mild cognitive impairment [PD‐MCI]) and 22 healthy controls (HC) were assessed at baseline and after three years. SC was assessed for facial emotion recognition (FER), affective and cognitive Theory of Mind (ToM), and social behavior. A comprehensive neuropsychological battery provided domain-specific z-scores. Cognitive classification followed MDS Level II criteria. Adjusted linear mixed models examined SC changes. Delta scores for SC tasks and z-score changes were correlated. Results: At baseline, PD-MCI patients scored lower on cognitive ToM than PD-CN and HC, with no significant group differences in affective ToM, FER, or social behavior. Over three years, PD-MCI patients showed a significant decline in cognitive ToM relative to PD-CN and HC, while affective ToM and emotion recognition declined only relative to HC. Patients who converted to a worse cognitive state (n = 16; PD-CN to PD-MCI or PD-MCI to PDD) showed lower baseline cognitive ToM and a steeper decline than stable patients. All SC changes correlated with visuospatial ability; affective ToM also correlated with memory, language, and attention, and FER with memory and executive function. Conclusions: Cognitive ToM declines in parallel with cognitive deterioration in PD, while remaining stable in PD-CN. SC measures may help identify patients at higher risk of cognitive decline.
- New
- Research Article
- 10.1016/j.yebeh.2025.110839
- Dec 5, 2025
- Epilepsy & behavior: E&B
- Martin Simcik + 9 more
Social cognition after epilepsy surgery in temporal lobe epilepsy: A long-term follow-up.
- New
- Research Article
- 10.1038/s41598-025-25393-7
- Dec 5, 2025
- Scientific reports
- Eva Landmann + 2 more
In social interactions, we often encounter situations where a partner's face is (partially) occluded, e.g., when wearing a mask. While emotion recognition in static faces is known to be less accurate under such conditions, we investigated whether these detrimental effects extend to empathic responding, mentalizing (i.e., Theory of Mind), and prosociality in more naturalistic settings. In four studies (total N = 157), we presented short video clips of narrators recounting neutral and emotionally negative autobiographical stories, with their faces shown in four conditions (two per experiment): fully visible, eyes covered, mouth covered, and audio-only. Participants then responded to questions assessing affect, mentalizing performance, and willingness to help. Affect ratings were slightly lower when the narrator's mouth was covered, and participants were less willing to help narrators with covered eyes. Importantly, however, empathic responding and mentalizing performance remained robust across visibility conditions. Thus, our findings suggest that social understanding (specifically, empathizing and mentalizing) is not substantially impeded by partial or complete facial occlusion when other cues, such as vocal information, can compensate. These insights may help contextualize concerns about detrimental effects of face coverage in social interactions.
- New
- Research Article
- 10.1177/10778012251401890
- Dec 5, 2025
- Violence against women
- Angelos Kissas + 1 more
This article enquires into the communication of femicides in Greece as a discursive struggle over the emotional and moral recognition of their victims, waged through social media platforms. Specifically, it examines how high-traffic Greek feminist community pages on Facebook and Instagram engaged with killings of women in 2021, the year femicide rates peaked in the country. The article argues that these pages develop a feminist-populist critique of femicide caught up in the algorithmic bias of platformized communication, and reflects on whether this critique can not only raise awareness of gendered violence but also highlight the structural conditions under which it occurs.
- New
- Research Article
- 10.1007/s11031-025-10170-w
- Dec 4, 2025
- Motivation and Emotion
- Emanuele Castano + 8 more
Abstract Research has shown an association between reading fiction and the ability to recognize emotions in both ourselves and others. Here we propose an account of these findings that requires distinguishing between literary and popular fiction and, thus, between implicit and explicit emotionality in the language of fiction. We report a reanalysis of data from two studies showing that exposure to literary (but not popular) fiction is associated positively with emotion recognition in others (Study 1) and negatively with alexithymia, a deficit in recognizing our own emotions (Study 2). We then present findings from a corpus analysis (Study 3) showing that the likelihood that a novel is literary (vs. popular) increases with the degree of implicit (vs. explicit) emotionality in its language. These results suggest that our interpersonal and intrapersonal emotion recognition skills might benefit from reading fiction that triggers inferential processes about emotion, rather than fiction that is replete with emotion words.
- New
- Research Article
- 10.1080/02699931.2025.2596318
- Dec 4, 2025
- Cognition and Emotion
- Xu Luo + 2 more
ABSTRACT Subthreshold depression (StD), a subclinical depressive state, is highly prevalent and elevates the risk of developing major depressive disorder. Previous studies have found that individuals with StD are impaired in facial emotional expression recognition, yet these studies primarily used static rather than dynamic facial expressions, even though dynamic expressions have higher ecological validity. It remains unclear whether StD is associated with impaired recognition of dynamic facial emotional expressions and whether any abnormalities are stable over time. Forty-six individuals with StD and forty-five non-depressed individuals performed a dynamic and a static facial emotional expression recognition task, and repeated the same tasks in a follow-up assessment after a 4-month interval. In the dynamic task, StD individuals showed lower recognition thresholds than non-depressed individuals only for the angry expression, at both the initial and follow-up assessments. In the static task, the StD group demonstrated significantly higher accuracy only for angry expressions at the initial assessment, but not at the follow-up assessment. These results indicate that the dynamic facial expression recognition task, which offers higher ecological validity than the static task, may serve as an auxiliary objective marker for depression.
- New
- Research Article
- 10.48175/ijarsct-30156
- Dec 4, 2025
- International Journal of Advanced Research in Science Communication and Technology
- Shreyas V + 4 more
The exponential growth of multimedia data across digital platforms has created an ever-increasing need for intelligent, automated video summarization systems capable of generating concise, emotionally engaging, and contextually relevant summaries. State-of-the-art practices for creating trailers and editing videos still rely on highly manual approaches, in which editors go through hours of footage to identify significant scenes. This process is time-consuming, labor-intensive, biased by human judgment, and impractical for large-scale or real-time applications. This paper provides an extensive survey and in-depth analysis of human-in-the-loop, AI-assisted video summarization frameworks, with a focus on emotion-based scene extraction and collaborative editing. The paper proposes a combined scheme: MTCNN for face detection, FaceNet for identity recognition, and CNNs for emotion classification. These deep learning models detect, track, and analyze emotional expressions across frames to identify the scenes with the most narratively and affectively important content. Frame-level processing and trailer compilation are done with OpenCV, while a Flask-based interactive interface lets human editors review and refine the AI-generated summaries, balancing automation with creative input. The survey brings together thirteen key research works spanning predictive modeling, multimodal emotion recognition, and AI-human collaboration. The results show how human intuition, coupled with machine precision, can reduce editing time by as much as 70% without sacrificing quality or emotional depth. The paper also argues that emotion-aware hybrid systems will turn traditional video editing into an adaptive, scalable, intelligent process and open a new dimension for next-generation media production frameworks that can deliver emotionally resonant and narratively cohesive video summaries.
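As a hedged illustration of the frame-level loop such a pipeline implies (sample frames, detect faces, score emotional salience, keep the top scenes), here is a minimal OpenCV sketch. The paper's scheme uses MTCNN, FaceNet, and a CNN emotion classifier; this sketch substitutes a Haar cascade detector and a stub `score_emotion` function (a hypothetical placeholder, here just image sharpness) so the skeleton stays self-contained.

```python
# Minimal frame-sampling sketch for emotion-based scene extraction.
# The real pipeline would plug a trained CNN classifier into score_emotion.
import cv2

def score_emotion(face_bgr):
    # Placeholder score: sharpness of the face crop. A real system would
    # run a CNN emotion classifier here instead.
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def top_frames(video_path, every_n=30, k=5):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap, scored, idx = cv2.VideoCapture(video_path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:  # subsample frames for speed
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
                scored.append((score_emotion(frame[y:y+h, x:x+w]), idx))
        idx += 1
    cap.release()
    return sorted(scored, reverse=True)[:k]  # candidate trailer moments

print(top_frames("footage.mp4"))  # hypothetical input file
```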
- New
- Research Article
- 10.1007/s00530-025-02088-7
- Dec 4, 2025
- Multimedia Systems
- Wenzhuo Liu + 2 more
DFGAnet: a dual-branch multimodal fusion network based on graph and attention for emotion recognition in conversation
- New
- Research Article
- 10.1007/s41870-025-02930-1
- Dec 4, 2025
- International Journal of Information Technology
- Rami Baazeem
Explainable cross-domain emotion recognition using non-linear optimization and multimodal feature fusion based deep learning model
- New
- Research Article
- 10.1088/2057-1976/ae1dfd
- Dec 3, 2025
- Biomedical Physics & Engineering Express
- Lizheng Pan + 3 more
The recognition of a subject's emotional state is of great significance for achieving humanized services in many human-computer interaction scenarios. Recently, identification of emotional states based on electroencephalogram (EEG) signals has received increasing attention. However, due to the complexity of EEG signals, EEG-based emotion recognition is very challenging. In this research, a novel learning-based framework, BrainEmoNet, is proposed to improve emotion recognition accuracy from the perspective of the asymmetry of human brain functions. BrainEmoNet consists of a frequency-domain feature network (FFN), a long-term dependent feature network (LDFN), and a spatial characteristic analysis network (SCAN). The parallel FFN and LDFN extract, respectively, the frequency-domain and long-term dependent features of the information in each brain channel. Meanwhile, based on the working principles of the human brain, the SCAN, with a channel-spatial attention mechanism, focuses on high-value information channels by assigning adaptive weights and analyzes the spatial characteristics of the frequency-domain and time-domain features. This time-frequency-spatial feature analysis fully exploits the emotional information contained in EEG signals. Experimental results on the multi-modal DEAP dataset demonstrate competitive performance of BrainEmoNet against existing state-of-the-art models. In subject-dependent experiments, the proposed model achieves identification accuracies of 86.77% and 82.14% in the arousal and valence dimensions, respectively, compared to 75.53% and 72.83% in subject-independent experiments. The proposed BrainEmoNet model can be used as an auxiliary tool for the assessment or monitoring of emotions.
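As context for the parallel-branch design the abstract describes (a frequency branch, a long-term dependency branch, and channel attention), a minimal PyTorch sketch follows. The 32-channel DEAP-style input shapes and all layer choices are illustrative assumptions; this is not the published BrainEmoNet architecture.

```python
# Minimal sketch of a parallel frequency/temporal EEG network with
# channel attention. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ParallelEEGNet(nn.Module):
    def __init__(self, n_ch=32, n_bands=5, n_classes=2):
        super().__init__()
        self.ffn = nn.Linear(n_bands, 16)                # frequency-domain branch
        self.ldfn = nn.LSTM(n_ch, 32, batch_first=True)  # long-term dependency branch
        self.attn = nn.Sequential(                       # per-channel attention scores
            nn.Linear(16, 16), nn.Tanh(), nn.Linear(16, 1))
        self.head = nn.Linear(16 + 32, n_classes)

    def forward(self, band_feats, raw):
        # band_feats: (B, n_ch, n_bands); raw: (B, time, n_ch)
        f = torch.relu(self.ffn(band_feats))             # (B, n_ch, 16)
        a = torch.softmax(self.attn(f), dim=1)           # adaptive channel weights
        f = (a * f).sum(dim=1)                           # attention-pooled channels
        _, (h, _) = self.ldfn(raw)                       # final LSTM hidden state
        return self.head(torch.cat([f, h[-1]], dim=1))

model = ParallelEEGNet()
logits = model(torch.randn(4, 32, 5), torch.randn(4, 128, 32))
print(logits.shape)  # torch.Size([4, 2]) -> arousal or valence logits
```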
- New
- Research Article
- 10.1145/3770633
- Dec 2, 2025
- Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
- Gyeongwon Cha + 4 more
Emotion recognition has been an actively researched topic in the field of HCI. However, multimodal datasets used for emotion recognition often contain sensitive personal information, such as physiological signals, facial images, and behavioral patterns, raising significant privacy concerns. Privacy issues become especially crucial in workplace settings because of risks such as surveillance and unauthorized data usage arising from misuse of collected datasets. To address this issue, we propose an Encrypted Emotion Recognition (EER) framework that performs real-time inference on encrypted data using the CKKS homomorphic encryption (HE) scheme. We evaluated the proposed framework on the publicly available WESAD and Hide-and-Seek datasets, demonstrating successful stress/emotion recognition under encryption. Encrypted inference achieved accuracy similar to plaintext inference: 0.966 (plaintext) vs. 0.967 (ciphertext) on the WESAD dataset, and 0.868 for both on the Hide-and-Seek dataset. Encrypted inference was performed on a GPU, with average inference times of 333 milliseconds for the general model and 455 milliseconds for the personalized model. Furthermore, we validated the feasibility of semi-supervised learning and model personalization in encrypted environments, enhancing the framework's real-world applicability. Our findings suggest that the EER framework provides a scalable, privacy-preserving solution for emotion recognition in domains such as healthcare and workplace settings, where securing sensitive data is of critical importance.
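For readers unfamiliar with CKKS, the core pattern of encrypted inference can be sketched with the TenSEAL library: encrypt features on the client, evaluate a linear layer on ciphertexts server-side, and decrypt only at the key holder. The feature values, weights, and parameter choices below are toy assumptions, not the paper's models or settings.

```python
# Minimal CKKS encrypted-inference sketch (single linear layer) using TenSEAL.
import tenseal as ts

# Client side: build a CKKS context and encrypt a feature vector.
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()           # needed for rotations inside dot products
features = [0.2, -1.3, 0.7, 0.05]    # assumed normalized physiological features
enc_x = ts.ckks_vector(ctx, features)

# Server side: evaluate w.x + b without ever decrypting the input.
weights, bias = [0.5, -0.1, 0.9, 0.3], 0.1
enc_score = enc_x.dot(weights) + bias

# Client side: only the secret-key holder can read the stress/emotion score.
print(enc_score.decrypt())           # approximately [0.975]
```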
- New
- Research Article
- 10.1007/s40617-025-01133-1
- Dec 2, 2025
- Behavior Analysis in Practice
- Lydia S Lindsey + 2 more
Abstract Many social skills, such as empathic responding, social referencing, and facial emotion recognition, require a variety of conditional discriminations under a wide array of stimulus conditions. Proficiency with these responses in the natural environment involves the ability to identify a variety of emotions across a wide array of faces, genders, ages, ethnicities, and contexts. Using empirically validated stimuli within assessment contexts that represent a wide spectrum of variation across relevant features increases the likelihood of teaching the stimulus discriminations necessary for broadly applicable emotion tacting skills. Currently, there is little guidance in behavior analysis on how to conduct a comprehensive assessment of emotion tacting across diverse demographics using empirically validated stimuli. Therefore, this manuscript provides an example of the process we adopted to create a preliminary assessment of facial emotion recognition that includes empirically validated stimuli representing a multitude of diverse faces, which we named the “Measurement of Emotions Tacting for Empathic Responding” (METER). We hope this assessment tutorial will bring awareness to the importance of identifying appropriate, validated, and demographically diverse stimuli; highlight the issues that may arise from overlooking the stimuli we use to assess and teach complex social skills; and encourage researchers and practitioners to develop inclusive assessments for a variety of social skills using validated and diverse stimuli, to aid in developing both targeted and socially valid interventions.
- New
- Research Article
- 10.1016/j.ijcce.2024.11.008
- Dec 1, 2025
- International Journal of Cognitive Computing in Engineering
- Xueliang Kang
Speech Emotion Recognition Algorithm of Intelligent Robot Based on ACO-SVM
- New
- Research Article
- 10.1007/s11571-025-10328-9
- Dec 1, 2025
- Cognitive neurodynamics
- Abgeena Abgeena + 1 more
Emotion recognition (ER) is crucial for understanding human behaviours, social interactions, and psychological well-being. Electroencephalography (EEG) has emerged as a promising tool for capturing the neural correlates of emotions. This work is a systematic review of articles on ER using EEG signals. A total of 120 articles, published between 2018 and 2024, were selected from 1,041 following PRISMA guidelines with defined inclusion and exclusion criteria. This article aims to provide an in-depth understanding of the current landscape of ER from EEG signals using deep learning (DL), offering guidance for researchers and practitioners seeking more refined and reliable emotion classification systems. To explore the effectiveness of DL models in EEG-based ER, several DL models, such as the convolutional neural network (CNN), long short-term memory (LSTM), gated recurrent unit (GRU), hybrid bidirectional LSTM (BiLSTM), and bidirectional GRU, and advanced DL models such as the convolutional recurrent neural network and EEG-Conformer, are applied to two popular datasets, SEED and GAMEEMO, to illustrate the full ER pipeline. The performance of the DL models is also compared with basic machine learning (ML) models such as SVM, k-nearest neighbors, and logistic regression, and boosting algorithms such as AdaBoost, XGBoost, and LightGBM. Accuracy, precision, recall, and F1-scores are analysed to determine the most effective model for EEG-based ER. The findings demonstrate that hybrid DL models are more efficacious than ML models: the best-performing model (BiLSTM) classified emotions with an accuracy of 90.54% on the GAMEEMO dataset. This research contributes to the growing body of literature on ER, provides insights into the feasibility of using EEG signals to understand emotional states, and presents a structured roadmap for future exploration. The findings can aid the development of more accurate and reliable ER systems, with wide-ranging applications in psychology, the social sciences, and human-computer interaction.
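To make the best-performing model concrete, here is a minimal PyTorch sketch of a BiLSTM EEG classifier of the kind the review evaluates; the 14-channel input (GAMEEMO-style) and all hyperparameters are illustrative assumptions rather than the reviewed configuration.

```python
# Minimal BiLSTM EEG classifier sketch. Shapes and hyperparameters are
# illustrative assumptions, not the review's experimental setup.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_ch=14, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_ch, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)  # forward + backward states

    def forward(self, x):            # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])   # classify from the last time step

model = BiLSTMClassifier()
print(model(torch.randn(8, 256, 14)).shape)  # torch.Size([8, 4])
```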
- New
- Research Article
- 10.1007/s11571-025-10324-z
- Dec 1, 2025
- Cognitive neurodynamics
- Xiaodan Zhang + 5 more
EEG signals are widely used in emotion recognition, which currently suffers from the difficulty of obtaining highly distinguishable features. We propose CNN-BiLSTM-CS for EEG-based emotion recognition, which addresses the shortcomings in feature extraction of the traditional LSTM's unidirectional propagation and of Softmax-only supervision. The method first combines BiLSTM with a CNN, which extracts emotion feature information bidirectionally, and then combines Center loss and Softmax loss into a joint loss function that minimizes intra-class distance and maximizes inter-class distance, improving recognition ability. The DEAP and SEED datasets are employed to test the performance of CNN-BiLSTM-CS. The average accuracies for valence and arousal are 94.22% and 92.16% on DEAP, an improvement of almost 6% over CNN-LSTM. The three-class accuracy on the SEED dataset is 95.45%. CNN-BiLSTM-CS significantly improves the recognition of deep EEG features through the improved network structure and the combined loss function.
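The joint loss the abstract describes (Softmax cross-entropy plus a Center loss that pulls features toward class centroids) can be sketched as follows; the feature size, the 0.5 weighting factor, and the three SEED-style classes are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of a joint Center + Softmax loss: cross-entropy separates
# classes while the center term shrinks intra-class distance.
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    def __init__(self, n_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_classes, feat_dim))

    def forward(self, feats, labels):
        # Squared distance of each feature to its own class center.
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

feats = torch.randn(16, 64, requires_grad=True)   # stand-in for CNN-BiLSTM features
logits = nn.Linear(64, 3)(feats)                  # 3 classes, SEED-style
labels = torch.randint(0, 3, (16,))
center = CenterLoss(n_classes=3, feat_dim=64)
loss = nn.functional.cross_entropy(logits, labels) + 0.5 * center(feats, labels)
loss.backward()
print(loss.item())
```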
- New
- Research Article
- 10.1016/j.mex.2025.103468
- Dec 1, 2025
- MethodsX
- Rabita Hasan + 1 more
A comparative analysis of emotion recognition from EEG signals using temporal features and hyperparameter-tuned machine learning techniques.
- New
- Research Article
- 10.1016/j.scog.2025.100382
- Dec 1, 2025
- Schizophrenia research. Cognition
- K Van Der Walt + 10 more
Exploring the relationships between Early Childhood Adversity, Social Cognition, and Aggression in a South African Study of People Living with Schizophrenia.
- New
- Research Article
- 10.1016/j.cortex.2025.09.014
- Dec 1, 2025
- Cortex; a journal devoted to the study of the nervous system and behavior
- Fabio Campanella + 3 more
Face processing deficits following brain tumours: Behavioural correlates and surgery-sensitive hotspots.
- New
- Research Article
- 10.1007/s11517-025-03430-x
- Dec 1, 2025
- Medical & biological engineering & computing
- Qiaoli Zhou + 5 more
Electroencephalography (EEG) for emotion recognition has garnered significant interest in brain-computer interface (BCI) research. Nevertheless, to develop an effective emotion identification model, features need to be extracted from EEG data from multiple views. To tackle the problems of multi-feature interaction and domain adaptation, we propose IF-MMCL, a network that leverages multi-modal data in a multi-view representation and integrates an individual-focused network. In our approach, we build a multi-view, individual-focused network that utilizes individual-focused contrastive learning to improve model generalization. The network employs different structures for multi-view feature extraction and uses multi-feature relationship computation to identify the relationships between features from various views and modalities. Our model is validated on four public emotion datasets, each containing different emotion classification tasks. In leave-one-subject-out experiments, IF-MMCL generalizes better than previous methods with limited data.
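The abstract does not spell out the individual-focused contrastive objective, so as a generic stand-in here is a minimal NT-Xent-style contrastive loss over two views of the same trials; all names and sizes are illustrative assumptions, not the paper's formulation.

```python
# Minimal contrastive-learning sketch: pull two views of the same trial
# together, push other trials apart (NT-Xent-style objective).
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (B, D) embeddings of two views of the same B trials."""
    b = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)          # (2B, D), unit norm
    sim = z @ z.t() / tau                                # scaled cosine similarities
    sim = sim.masked_fill(torch.eye(2 * b, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(b)])
    return F.cross_entropy(sim, targets)                 # each view's positive is its pair

z1 = torch.randn(8, 32, requires_grad=True)  # view 1 (e.g., EEG features)
z2 = torch.randn(8, 32, requires_grad=True)  # view 2 (e.g., another modality)
nt_xent(z1, z2).backward()
```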