Articles published on emotion-recognition
15,205 search results, sorted by recency
- New
- Research Article
- 10.1007/s11571-025-10372-5
- Dec 1, 2025
- Cognitive neurodynamics
- Linlin Li + 1 more
The online version contains supplementary material available at 10.1007/s11571-025-10372-5.
- New
- Research Article
- 10.1016/j.bspc.2025.108165
- Dec 1, 2025
- Biomedical Signal Processing and Control
- Hairui Fang + 6 more
Development of a non-attached multi-person emotion recognition system based on sitting body motion signals
- New
- Research Article
- 10.1016/j.bspc.2025.108111
- Dec 1, 2025
- Biomedical Signal Processing and Control
- Akhilesh Kumar + 1 more
EEG-based emotion recognition: A deep learning approach to brain region analysis
- New
- Research Article
- 10.1016/j.neucom.2025.131715
- Dec 1, 2025
- Neurocomputing
- Jiahao Tang + 11 more
UDA-DDA: Unsupervised domain adaptation with dynamic distribution alignment network for emotion recognition using EEG signals
- New
- Research Article
- 10.1016/j.neucom.2025.131577
- Dec 1, 2025
- Neurocomputing
- Yong Zhang + 4 more
MPFBL: Modal pairing-based cross-fusion bootstrap learning for multimodal emotion recognition
- New
- Research Article
- 10.1016/j.engappai.2025.112447
- Dec 1, 2025
- Engineering Applications of Artificial Intelligence
- Aziguli Wulamu + 5 more
Enhanced multi-modal emotion recognition using the feature level fusion
- New
- Research Article
- 10.1016/j.bspc.2025.108151
- Dec 1, 2025
- Biomedical Signal Processing and Control
- M Chaitanya Bharathi + 1 more
Multi-dimensional input-based Adaptive Residual DenseNet with Attention Mechanism for patient emotion recognition from multi-modal data
- New
- Research Article
- 10.1016/j.inffus.2025.103335
- Dec 1, 2025
- Information Fusion
- Sainan Zhang + 4 more
MATADOR: Multimodal traffic accident prediction enhanced by multi-source aggregated emotion recognition
- New
- Research Article
- 10.1016/j.apacoust.2025.110963
- Dec 1, 2025
- Applied Acoustics
- Ismail Shahin + 4 more
Two-stage emotion recognition framework using CNN–transformer architecture and speaker cues
- New
- Research Article
- 10.1016/j.measurement.2025.118165
- Dec 1, 2025
- Measurement
- Ravi + 1 more
A filtering approach for speech emotion recognition using wavelet approximation coefficient
- New
- Research Article
- 10.1016/j.apacoust.2025.110905
- Dec 1, 2025
- Applied Acoustics
- Astha Tripathi + 1 more
Multilingual speech emotion recognition using IGRFXG – Ensemble feature selection approach
- New
- Research Article
- 10.1016/j.eswa.2025.128605
- Dec 1, 2025
- Expert Systems with Applications
- Chang Wang + 3 more
Bimodal speech emotion recognition via contrastive self-alignment learning
- New
- Research Article
- 10.1016/j.engappai.2025.111969
- Dec 1, 2025
- Engineering Applications of Artificial Intelligence
- Dae Hyeon Kim + 1 more
Semi-supervised graph contrastive learning for emotion recognition based on electroencephalogram signals
- New
- Research Article
- 10.1016/j.engappai.2025.112422
- Dec 1, 2025
- Engineering Applications of Artificial Intelligence
- Dongdong Li + 3 more
Resource-efficient cross-subject emotion recognition from electroencephalogram via spiking domain discriminators
- New
- Research Article
- 10.1016/j.nanoen.2025.111483
- Dec 1, 2025
- Nano Energy
- Wenyan Qiao + 10 more
Deep learning-assisted high sensitivity acoustic sensor for enhanced auditory robot real-time emotion recognition
- New
- Research Article
- 10.1016/j.bspc.2025.108231
- Dec 1, 2025
- Biomedical Signal Processing and Control
- Shuaiqi Liu + 6 more
Cross-subject emotion recognition by EEG driven spatio-temporal hybrid network based on domain adaptation and dynamic graph attention
- New
- Research Article
- 10.1016/j.neucom.2025.131749
- Dec 1, 2025
- Neurocomputing
- Kaiwei Shen + 4 more
Dynamic sparse directed graph convolutional network with attention mechanisms for EEG emotion recognition
- New
- Research Article
- 10.31083/jin44121
- Nov 27, 2025
- Journal of integrative neuroscience
- Jiaqi Yang + 3 more
This study addresses three key challenges in subject-independent electroencephalography (EEG) emotion recognition: limited data availability, restricted cross-domain knowledge transfer, and suboptimal feature extraction. The aim is to develop an innovative framework that enhances recognition performance while preserving data privacy. The study introduces a novel multi-teacher knowledge distillation framework that incorporates data privacy considerations. The framework comprises n subnets, each sequentially trained on a distinct EEG dataset without data sharing. Each subnet after the first acquires knowledge through the weights and features of all preceding subnets, gaining access to more EEG signals during training while maintaining privacy. To enhance cross-domain knowledge transfer, a multi-teacher knowledge distillation strategy was designed, featuring knowledge filters and adaptive multi-teacher knowledge distillation losses. The knowledge filter integrates cross-domain information using a multi-head attention module with a gate mechanism, ensuring effective inheritance of knowledge from all previous subnets. Simultaneously, the adaptive multi-teacher knowledge distillation loss dynamically adjusts the direction of knowledge transfer based on filtered feature similarity, preventing the knowledge loss seen in single-teacher models. Furthermore, a spatio-temporal gate module is proposed to remove unnecessary frame-level information and select the most informative channels for improved feature representation, without requiring expert knowledge. Experimental results demonstrate the superiority of the proposed method over the current state of the art, achieving a 2% performance improvement on the DEAP dataset.
The proposed multi-teacher distillation framework with data privacy addresses the challenges of insufficient data availability, limited cross-domain knowledge transfer, and suboptimal feature extraction in subject-independent EEG emotion recognition, demonstrating strong potential for scalable and privacy-preserving emotion recognition applications.
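The adaptive multi-teacher distillation loss summarized above can be sketched in a few lines. This is a minimal illustration, not the paper's code: it assumes cosine similarity between the student's filtered feature and each teacher's feature as the weighting signal, and a temperature-scaled KL divergence per teacher; all function names and signatures are hypothetical.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over a 1-D array of logits."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    """KL divergence KL(p || q) for two probability vectors."""
    return float(np.sum(p * np.log(p / q)))

def adaptive_multi_teacher_kd_loss(student_logits, teacher_logits_list,
                                   student_feat, teacher_feats, T=2.0):
    """Weight each teacher's KL term by the cosine similarity between the
    student's (filtered) feature and that teacher's feature, so more
    similar teachers dominate the transferred knowledge."""
    sims = [np.dot(student_feat, f) /
            (np.linalg.norm(student_feat) * np.linalg.norm(f) + 1e-8)
            for f in teacher_feats]
    w = softmax(sims)  # adaptive weights over teachers
    p_s = softmax(student_logits, T)
    loss = 0.0
    for wi, t_logits in zip(w, teacher_logits_list):
        p_t = softmax(t_logits, T)
        loss += wi * kl(p_t, p_s)  # soft-label distillation term
    return loss
```

When a teacher's soft labels match the student's exactly, its KL term vanishes; dissimilar teachers are down-weighted rather than discarded, which is the intuition behind avoiding single-teacher knowledge loss.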
- New
- Research Article
- 10.62583/rseltl.v3i6.116
- Nov 27, 2025
- Research Studies in English Language Teaching and Learning
- Rseltl Journal + 1 more
This qualitative study investigates the incorporation of Social-Emotional Learning (SEL) into Task-Based Language Teaching (TBLT) for English language learners in a post-pandemic Saudi Arabian university setting. Recognizing the heightened need to support students' emotional and social well-being alongside their academic recovery, the study explored 30 EFL students' experiences via reflective journaling, interviews, and observations across a six-week SEL-enhanced TBLT intervention. Results showed that integrating SEL values into communicative activities promoted balanced growth, yielding six major outcomes: increased emotional engagement with learning, enhanced empathy in group work, improved reflective recognition of emotions, a greater sense of classroom safety and confidence, enhanced social support and belonging with peers, and significant personal development and self-discovery. The findings show that this combined practice effectively met students' linguistic and socio-emotional needs concurrently, reshaping the foreign language classroom as a venue for rebuilding both communicative proficiency and emotional resilience. The study concludes that SEL-enriched TBLT is a robust, comprehensive pedagogical framework for post-pandemic education, fostering whole-learner development through the simultaneous build-up of emotional intelligence, social competence, and linguistic ability.
- New
- Research Article
- 10.3390/e27121201
- Nov 26, 2025
- Entropy
- Michael Norval + 1 more
We evaluate a hybrid quantum–classical pipeline for speech emotion recognition (SER) on a custom Afrikaans corpus using MFCC-based spectral features with pitch and energy variants, explicitly comparing three quantum approaches—a variational quantum classifier (VQC), a quantum support vector machine (QSVM), and a Quantum Approximate Optimisation Algorithm (QAOA)-based classifier—against a CNN–LSTM (CLSTM) baseline. We detail the classical-to-quantum data encoding (angle embedding with bounded rotations and an explicit feature-to-qubit map) and report test accuracy, weighted precision, recall, and F1. Under ideal analytic simulation, the quantum models reach 41–43% test accuracy; under a realistic 1% NISQ noise model (100–1000 shots) this degrades to 34–40%, versus 73.9% for the CLSTM baseline. Despite the markedly lower empirical accuracy—expected in the NISQ era—we provide an end-to-end, noise-aware hybrid SER benchmark and discuss the asymptotic advantages of quantum subroutines (Chebyshev-based quantum singular value transformation, quantum walks, and block encoding) that become relevant only in the fault-tolerant regime.
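The angle-embedding step described above can be illustrated with a tiny classical simulation. This sketch assumes min-max scaling of each feature into a bounded rotation range [0, π] with one RY rotation per qubit (a common convention, not necessarily the exact map used in the paper), and uses the identity that for the product state RY(θ)|0⟩ the Pauli-Z expectation equals cos θ:

```python
import numpy as np

def angle_embed(features, lo, hi):
    """Map classical features into bounded RY rotation angles in [0, pi]
    via min-max scaling; values outside [lo, hi] are clipped."""
    f = np.clip(np.asarray(features, dtype=float), lo, hi)
    return (f - lo) / (hi - lo) * np.pi

def z_expectations(thetas):
    """For product states RY(theta)|0>, the per-qubit Pauli-Z expectation
    is cos^2(theta/2) - sin^2(theta/2) = cos(theta)."""
    return np.cos(np.asarray(thetas, dtype=float))

# One qubit per feature: an MFCC-like feature vector becomes a vector of
# rotation angles, and the measured <Z> values are the "quantum features".
mfcc_like = np.array([-12.0, 0.0, 7.5])
angles = angle_embed(mfcc_like, lo=-20.0, hi=20.0)
readout = z_expectations(angles)
```

Bounding the rotations keeps distinct feature values from wrapping around the Bloch sphere onto the same state, which is why the encoding clips before scaling.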