
Related Topics

  • Emotion Recognition Task
  • Emotion Recognition System
  • Emotional Speech
  • Affect Recognition

Articles published on emotion-recognition

15,205 search results, sorted by recency
  • Research Article
  • 10.1007/s11571-025-10372-5
Cross-modal alignment and fusion of EEG-visual based on mixed attention mechanism for emotion recognition.
  • Dec 1, 2025
  • Cognitive neurodynamics
  • Linlin Li + 1 more

The online version contains supplementary material available at 10.1007/s11571-025-10372-5.

  • Research Article
  • 10.1016/j.bspc.2025.108165
Development of a non-attached multi-person emotion recognition system based on sitting body motion signals
  • Dec 1, 2025
  • Biomedical Signal Processing and Control
  • Hairui Fang + 6 more

  • Research Article
  • 10.1016/j.bspc.2025.108111
EEG-based emotion recognition: A deep learning approach to brain region analysis
  • Dec 1, 2025
  • Biomedical Signal Processing and Control
  • Akhilesh Kumar + 1 more

  • Research Article
  • 10.1016/j.neucom.2025.131715
UDA-DDA: Unsupervised domain adaptation with dynamic distribution alignment network for emotion recognition using EEG signals
  • Dec 1, 2025
  • Neurocomputing
  • Jiahao Tang + 11 more

  • Research Article
  • 10.1016/j.neucom.2025.131577
MPFBL: Modal pairing-based cross-fusion bootstrap learning for multimodal emotion recognition
  • Dec 1, 2025
  • Neurocomputing
  • Yong Zhang + 4 more

  • Research Article
  • 10.1016/j.engappai.2025.112447
Enhanced multi-modal emotion recognition using the feature level fusion
  • Dec 1, 2025
  • Engineering Applications of Artificial Intelligence
  • Aziguli Wulamu + 5 more

  • Research Article
  • 10.1016/j.bspc.2025.108151
Multi-dimensional input-based Adaptive Residual DenseNet with Attention Mechanism for patient emotion recognition from multi-modal data
  • Dec 1, 2025
  • Biomedical Signal Processing and Control
  • M Chaitanya Bharathi + 1 more

  • Research Article
  • 10.1016/j.inffus.2025.103335
MATADOR: Multimodal traffic accident prediction enhanced by multi-source aggregated emotion recognition
  • Dec 1, 2025
  • Information Fusion
  • Sainan Zhang + 4 more

  • Research Article
  • 10.1016/j.apacoust.2025.110963
Two-stage emotion recognition framework using CNN–transformer architecture and speaker cues
  • Dec 1, 2025
  • Applied Acoustics
  • Ismail Shahin + 4 more

  • Research Article
  • 10.1016/j.measurement.2025.118165
A filtering approach for speech emotion recognition using wavelet approximation coefficient
  • Dec 1, 2025
  • Measurement
  • Ravi + 1 more

  • Research Article
  • 10.1016/j.apacoust.2025.110905
Multilingual speech emotion recognition using IGRFXG – Ensemble feature selection approach
  • Dec 1, 2025
  • Applied Acoustics
  • Astha Tripathi + 1 more

  • Research Article
  • 10.1016/j.eswa.2025.128605
Bimodal speech emotion recognition via contrastive self-alignment learning
  • Dec 1, 2025
  • Expert Systems with Applications
  • Chang Wang + 3 more

  • Research Article
  • 10.1016/j.engappai.2025.111969
Semi-supervised graph contrastive learning for emotion recognition based on electroencephalogram signals
  • Dec 1, 2025
  • Engineering Applications of Artificial Intelligence
  • Dae Hyeon Kim + 1 more

  • Research Article
  • 10.1016/j.engappai.2025.112422
Resource-efficient cross-subject emotion recognition from electroencephalogram via spiking domain discriminators
  • Dec 1, 2025
  • Engineering Applications of Artificial Intelligence
  • Dongdong Li + 3 more

  • Research Article
  • 10.1016/j.nanoen.2025.111483
Deep learning-assisted high sensitivity acoustic sensor for enhanced auditory robot real-time emotion recognition
  • Dec 1, 2025
  • Nano Energy
  • Wenyan Qiao + 10 more

  • Research Article
  • 10.1016/j.bspc.2025.108231
Cross-subject emotion recognition by EEG driven spatio-temporal hybrid network based on domain adaptation and dynamic graph attention
  • Dec 1, 2025
  • Biomedical Signal Processing and Control
  • Shuaiqi Liu + 6 more

  • Research Article
  • 10.1016/j.neucom.2025.131749
Dynamic sparse directed graph convolutional network with attention mechanisms for EEG emotion recognition
  • Dec 1, 2025
  • Neurocomputing
  • Kaiwei Shen + 4 more

  • Research Article
  • 10.31083/jin44121
A Multi-Teacher Distilling Framework With Data Privacy for EEG Emotion Recognition.
  • Nov 27, 2025
  • Journal of integrative neuroscience
  • Jiaqi Yang + 3 more

This study addresses three key challenges in subject-independent electroencephalography (EEG) emotion recognition: limited data availability, restricted cross-domain knowledge transfer, and suboptimal feature extraction. The aim is to develop an innovative framework that enhances recognition performance while preserving data privacy. The study introduces a novel multi-teacher knowledge distillation framework that incorporates data privacy considerations. The framework comprises n subnets, each trained sequentially on distinct EEG datasets without data sharing. The subnets, excluding the initial one, acquire knowledge through the weights and features of all preceding subnets, enabling access to more EEG signals during training while maintaining privacy. To enhance cross-domain knowledge transfer, a multi-teacher knowledge distillation strategy was designed, featuring knowledge filters and adaptive multi-teacher knowledge distillation losses. The knowledge filter integrates cross-domain information using a multi-head attention module with a gate mechanism, ensuring effective inheritance of knowledge from all previous subnets. Simultaneously, the adaptive multi-teacher knowledge distillation loss dynamically adjusts the direction of knowledge transfer based on filtered feature similarity, preventing the knowledge loss seen in single-teacher models. Furthermore, a spatio-temporal gate module is proposed to eliminate unnecessary frame-level information from different channels and extract important channels for improved feature representation without requiring expert knowledge. Experimental results demonstrate the superiority of the proposed method over the current state of the art, achieving a 2% performance improvement on the DEAP dataset.
The proposed multi-teacher distillation framework with data privacy addresses the challenges of insufficient data availability, limited cross-domain knowledge transfer, and suboptimal feature extraction in subject-independent EEG emotion recognition, demonstrating strong potential for scalable and privacy-preserving emotion recognition applications.
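The similarity-driven weighting behind an adaptive multi-teacher distillation loss can be sketched in a few lines. This is a minimal illustration of the general technique only — the function name `adaptive_multi_teacher_kd_loss`, the plain cosine similarity, and the per-teacher MSE are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def adaptive_multi_teacher_kd_loss(student_feat, teacher_feats):
    """Toy adaptive multi-teacher distillation loss.

    Weights each teacher by the cosine similarity between its features
    and the student's, then combines per-teacher MSE losses with those
    weights, so more similar teachers steer the transfer more strongly.
    """
    sims = np.array([
        float(np.dot(student_feat, t) /
              (np.linalg.norm(student_feat) * np.linalg.norm(t) + 1e-8))
        for t in teacher_feats
    ])
    weights = softmax(sims)  # similar teachers receive larger weight
    losses = np.array([np.mean((student_feat - t) ** 2) for t in teacher_feats])
    return float(np.dot(weights, losses)), weights
```

In a real pipeline the similarity would be computed on filtered features (e.g. after an attention-based knowledge filter) and the loss back-propagated through the student network.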

  • Research Article
  • 10.62583/rseltl.v3i6.116
Beyond fluency: integrating social–emotional learning into task-based language teaching for the whole learner
  • Nov 27, 2025
  • Research Studies in English Language Teaching and Learning
  • Rseltl Journal + 1 more

This qualitative study investigates the incorporation of Social-Emotional Learning (SEL) into Task-Based Language Teaching (TBLT) for English language learners in a post-pandemic Saudi Arabian university setting. Recognizing the increased need to support students' emotional and social well-being alongside their academic recovery, the study explored 30 EFL students' experiences via reflective journaling, interviews, and observations across a six-week SEL-enhanced TBLT intervention. Results showed that integrating SEL values into communicative activities promoted balanced growth, yielding six major outcomes: increased emotional engagement with learning, enhanced empathy in group work, improved reflective recognition of emotions, a greater sense of classroom safety and confidence, stronger social support and belonging with peers, and significant personal development and self-discovery. The findings show that this combined practice efficiently met students' linguistic and socio-emotional needs concurrently, reshaping the foreign language classroom as a venue for rebuilding communicative proficiency and emotional resilience. The study concludes that SEL-enriched TBLT is a robust, comprehensive pedagogical framework for post-pandemic education, fostering the whole learner through the simultaneous development of emotional intelligence, social competence, and linguistic ability.

  • Research Article
  • 10.3390/e27121201
Quantum AI in Speech Emotion Recognition
  • Nov 26, 2025
  • Entropy
  • Michael Norval + 1 more

We evaluate a hybrid quantum–classical pipeline for speech emotion recognition (SER) on a custom Afrikaans corpus using MFCC-based spectral features with pitch and energy variants, explicitly comparing three quantum approaches—a variational quantum classifier (VQC), a quantum support vector machine (QSVM), and a Quantum Approximate Optimisation Algorithm (QAOA)-based classifier—against a CNN–LSTM (CLSTM) baseline. We detail the classical-to-quantum data encoding (angle embedding with bounded rotations and an explicit feature-to-qubit map) and report test accuracy, weighted precision, recall, and F1. Under ideal analytic simulation, the quantum models reach 41–43% test accuracy; under a realistic 1% NISQ noise model (100–1000 shots) this degrades to 34–40%, versus 73.9% for the CLSTM baseline. Despite the markedly lower empirical accuracy—expected in the NISQ era—we provide an end-to-end, noise-aware hybrid SER benchmark and discuss the asymptotic advantages of quantum subroutines (Chebyshev-based quantum singular value transformation, quantum walks, and block encoding) that become relevant only in the fault-tolerant regime.
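The angle embedding with bounded rotations mentioned in the abstract can be simulated classically for intuition. The sketch below assumes features pre-normalised to [0, 1] and one qubit per feature; `angle_embed` is a hypothetical name, and this is a generic illustration of angle encoding, not the authors' exact feature-to-qubit map.

```python
import numpy as np

def angle_embed(features, scale=np.pi):
    """Classically simulate angle embedding: one feature per qubit.

    Each feature x_i (assumed in [0, 1]) becomes a bounded rotation
    angle theta_i = scale * x_i; RY(theta_i) applied to |0> gives the
    single-qubit state [cos(theta/2), sin(theta/2)], and the qubits are
    combined into a product state via the tensor (Kronecker) product.
    """
    state = np.array([1.0])  # scalar "empty" register
    for x in features:
        theta = scale * float(x)  # bounded rotation angle
        qubit = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # RY(theta)|0>
        state = np.kron(state, qubit)  # append qubit to the register
    return state
```

An n-feature input thus yields a 2^n-amplitude state vector, which is why practical encodings like this one pair a small number of qubits with aggressive classical feature reduction (e.g. averaged MFCCs).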


Copyright 2025 Cactus Communications. All rights reserved.

Privacy PolicyCookies PolicyTerms of UseCareers