
Related Topics

  • Emotion Recognition Task
  • Emotion Recognition System
  • Emotional Speech
  • Affect Recognition

Articles published on emotion-recognition

15,205 search results, sorted by recency
  • New
  • Research Article
  • 10.1016/j.neunet.2025.108324
Cross-subject emotion recognition with loop adaptive adversarial transfer network.
  • Nov 12, 2025
  • Neural networks : the official journal of the International Neural Network Society
  • Feifan Yan + 8 more


  • New
  • Research Article
  • 10.1108/apjba-09-2023-0441
Advancement of neuroscience in different domains of organizational behavior: review, process and future research direction
  • Nov 12, 2025
  • Asia-Pacific Journal of Business Administration
  • Rachana Chattopadhyay

Purpose Over the past two decades, researchers have increasingly explored the intersection of neuroscience and organizational behavior (OB). However, the practical application and academic acceptance of neuroscience techniques within OB remain limited. This article provides a structured and comprehensive synthesis of how neuroscientific methods, such as fMRI and qEEG, are advancing research in key OB domains, including leadership, emotional intelligence, team dynamics and ethical decision-making. Design/methodology/approach A systematic literature review was conducted using the PRISMA framework. Articles published between 2002 and 2022 were selected through Scopus and Web of Science. Additionally, recent insights from 2023 to 2024 were manually reviewed and integrated to maintain relevance and currency. Findings The review reveals that neuroscience can enhance construct validity, reduce subjectivity, and uncover cognitive and affective processes underlying workplace behaviors. Key advancements include the neural mapping of leadership traits, emotion recognition and justice perception. The findings also highlight operational and ethical challenges that limit broader application in organizational settings. Originality/value This study bridges the gap between OB scholars and neuroscientists by offering a domain-specific synthesis and identifying research frontiers for cross-disciplinary collaboration. By incorporating recent evidence and proposing actionable future directions, the review offers a timely roadmap for integrating neuroscience into mainstream organizational research and practice.

  • Research Article
  • 10.3390/app152211971
Multimodal Emotion Recognition in Conversations Using Transformer and Graph Neural Networks
  • Nov 11, 2025
  • Applied Sciences
  • Hua Jin + 4 more

To comprehensively capture conversational emotion information within and between modalities, address the challenge of global and local feature modelling in conversation, and enhance the accuracy of multimodal conversational emotion recognition, we present a model called Multimodal Transformer and GNN for Emotion Recognition in Conversations (MTG-ERC). The model incorporates a multi-level Transformer fusion module that employs multi-head self-attention and cross-modal attention mechanisms to effectively capture interaction patterns within and between modalities. To address the shortcomings of attention-mechanism-based models in capturing short-term dependencies, we introduce a directed multi-relational graph fusion module, which employs directed graphs and multiple relation types to achieve efficient multimodal information fusion and to model short-term, speaker-dependent emotional shifts. By integrating the outputs of these two modules, the MTG-ERC model effectively combines global and local conversational emotion features and enhances intra-modal and inter-modal emotional interactions. The proposed model shows consistent improvements (around 1% absolute) in both accuracy and weighted F1 on the IEMOCAP and MELD datasets when compared with other baseline models, validating its effectiveness against existing approaches.
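
To make the fusion pattern concrete, here is a minimal, hypothetical sketch of cross-modal attention of the kind the abstract describes (one modality's sequence querying another's); the shapes and module sizes are assumptions, not the authors' MTG-ERC code:

```python
# Hypothetical sketch of cross-modal attention fusion (not the authors' MTG-ERC code):
# one modality's features attend to another's, the pattern the abstract describes.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_feats, audio_feats):
        # intra-modal: text attends to itself
        t, _ = self.self_attn(text_feats, text_feats, text_feats)
        # inter-modal: text queries attend to audio keys/values
        fused, _ = self.cross_attn(t, audio_feats, audio_feats)
        return fused

text = torch.randn(8, 20, 256)   # (batch, utterances, dim) -- invented shapes
audio = torch.randn(8, 20, 256)
print(CrossModalFusion()(text, audio).shape)  # torch.Size([8, 20, 256])
```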

  • Research Article
  • 10.55041/ijsrem53820
EmoLearn: Emotion-Adaptive E-Learning for Inclusive Education Using Real-Time Webcam
  • Nov 11, 2025
  • INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
  • Dr Mrs Pratibha Adkar + 4 more

This project creates a smart online learning platform that understands how students feel while they study. It uses AI to read facial expressions and respond in real time: cheering someone up when they’re frustrated, making lessons easier if they’re struggling, or suggesting a break when they seem tired. The goal is to make learning more supportive and personal for everyone, no matter their age or ability. It’s designed to be inclusive, so all learners feel understood and empowered. Keywords: Emotion-responsive learning, Adaptive e-learning, Real-time emotion detection, Inclusive education, Facial expression analysis, Emotion recognition via webcam, Emotion-adaptive content delivery.
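
As an illustration only, the emotion-to-response loop the abstract describes could be as simple as a rule table mapping a detected state to a platform action; the labels and actions below are invented, not EmoLearn's actual logic:

```python
# Illustrative rule-based adaptation policy (detected emotion -> platform response).
# The emotion labels, threshold, and actions are assumptions for this sketch.
def adapt_content(emotion: str, failed_attempts: int) -> str:
    if emotion == "frustrated":
        return "show_encouragement"
    if emotion in ("confused", "sad") or failed_attempts >= 3:
        return "simplify_lesson"
    if emotion == "tired":
        return "suggest_break"
    return "continue_lesson"

print(adapt_content("frustrated", 0))  # show_encouragement
```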

  • Research Article
  • 10.3758/s13415-025-01364-3
The effect of working memory load on selective attention towards threatening faces in socially anxious individuals: Behavioural and electrophysiological evidence.
  • Nov 11, 2025
  • Cognitive, affective & behavioral neuroscience
  • Mingfan Liu + 3 more

Cognitive models of anxiety propose that attention bias towards threat causes and maintains anxiety symptoms. However, the effect of working memory (WM) load on selective attention of threatening faces in individuals with socially anxious symptoms and the electrophysiological correlates are unclear. Event-related potentials (ERPs) were recorded from 30 socially anxious participants and 32 controls during an adapted emotional flanker task. Overall, socially anxious individuals showed worse accuracy and slower reaction times (RTs) in facial emotion recognition than controls. Furthermore, under high WM load, the N2 amplitudes for targets flanked by angry distractors were significantly larger than those for targets flanked by happy distractors, and the late positive potential (LPP) amplitudes for angry targets were significantly larger than those for happy targets in socially anxious participants. No such effects were found for N2 and LPP amplitudes under low WM load. The results suggest the impairment of top-down cognitive control in socially anxious individuals. The increased N2 amplitudes for targets flanked by angry distractors and LPP amplitudes for angry targets under high WM load in socially anxious individuals may be related to enhanced conflict monitoring and perceptual engagement for threatening faces under conditions where cognitive resources are taxed.

  • Research Article
  • 10.1007/s10994-025-06921-y
Model-driven validation of visual explanations for multimodal emotion recognition
  • Nov 10, 2025
  • Machine Learning
  • Guido Gagliardi + 5 more

AI-based emotion recognition approaches may benefit from the integration of multimodal data, but their explainability and validation are still a critical challenge. Indeed, the limited neurophysiological understanding of novel multimodal features, e.g. brain-heart interaction, can be insufficient to assess whether the AI-extracted physiological insights (i.e., the model explanations) accurately reflect the real underlying physiological processes. To validate the explanations obtained by an AI-based model in this context, we introduce a novel framework that autonomously identifies the optimal explanations for a black-box model used in emotion recognition. Our approach leverages a convolutional neural network to process BHI features, which are derived from EEG and HRV data and rearranged as images. A model-agnostic methodology is employed to extract local explanations, which are then dynamically evaluated to select the most accurate for representing specific emotional states. The effectiveness of the proposed framework is evaluated across multiple classification tasks, including up to 9-level arousal and valence emotion classification, as well as nine-class discrete emotion classification, using the MAHNOB-HCI and DEAP datasets. The system achieved remarkable accuracy levels, consistently reaching approximately 97–98% across all tasks. Furthermore, our dynamic selection framework revealed that Integrated Gradients outperformed other state-of-the-art explainable AI approaches in reliably capturing global explanations.
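
For readers unfamiliar with the attribution method the abstract singles out, here is a minimal Integrated Gradients sketch on a toy model (the standard path-integral approximation; not the authors' framework):

```python
# Minimal Integrated Gradients: average gradients along a straight-line path
# from a baseline to the input, scaled by (input - baseline).
import torch

def integrated_gradients(model, x, baseline, steps=50):
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        xi = (baseline + alpha * (x - baseline)).requires_grad_(True)
        model(xi).sum().backward()
        total += xi.grad                     # accumulate path gradients
    return (x - baseline) * total / steps    # Riemann approximation of the integral

model = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1))
x = torch.randn(1, 16)                       # toy stand-in for a BHI feature image
attr = integrated_gradients(model, x, torch.zeros_like(x))
print(attr.shape)  # torch.Size([1, 16])
```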

  • Research Article
  • 10.1177/13670069251384080
The relationship between language and emotion recognition in bilinguals is not robust to cultural and linguistic differences
  • Nov 8, 2025
  • International Journal of Bilingualism
  • Marta Szreder

Aims: This study set out to test the hypothesis that proficiency in a second language can lead to emotional advantage, via increased Emotional Intelligence and improved Facial Emotion Recognition (FER). Design: Unlike previous studies, this project adopted a within-subject design, rather than comparing bi- and monolinguals. We investigated the participants’ performance on FER tasks, as a function of their second-language English proficiency and trait emotional intelligence. Data and analysis: Using an online experimental task, we tested FER in static posed photographs in 256 adult participants with a wide range of native languages. To examine the role of task type, multiple-choice and free-labelling protocols were used. We collected self-reported measures of L2 English proficiency and administered a direct proficiency measure, as well as a measure of trait emotional intelligence. Multiple regression analysis was used to examine the relationship between the variables. Findings: The analysis revealed only a relationship between the direct proficiency measure and the multiple-choice FER task, but no effect of trait emotional intelligence or self-reported L2 English proficiency. Originality: This study contradicts previous findings based on across-subject comparisons in linguistically and culturally homogeneous populations. Implications: The results suggest that the relationship between bilingualism and FER is sensitive to methodological, cultural, and linguistic differences. Future investigations of the relationship between language and emotion in bilinguals should take that into consideration.

  • Research Article
  • 10.1016/j.chiabu.2025.107787
Childhood maltreatment influences parental mimicry of children's emotional facial expressions.
  • Nov 7, 2025
  • Child abuse & neglect
  • Annie Bérubé + 4 more


  • Research Article
  • 10.3390/bioengineering12111220
EEG-Based Local–Global Dimensional Emotion Recognition Using Electrode Clusters, EEG Deformer, and Temporal Convolutional Network
  • Nov 7, 2025
  • Bioengineering
  • Hyoung-Gook Kim + 1 more

Emotions are complex phenomena arising from cooperative interactions among multiple brain regions. Electroencephalography (EEG) provides a non-invasive means to observe such neural activity; however, as it captures only electrode-level signals from the scalp, accurately classifying dimensional emotions requires considering both local electrode activity and the global spatial distribution across the scalp. Motivated by this, we propose a brain-inspired EEG electrode-cluster-based framework for dimensional emotion classification. The model organizes EEG electrodes into nine clusters based on spatial and functional proximity, applying an EEG Deformer to each cluster to learn frequency characteristics, temporal dynamics, and local signal patterns. The features extracted from each cluster are then integrated using a bidirectional cross-attention (BCA) mechanism and a temporal convolutional network (TCN), effectively modeling long-term inter-cluster interactions and global signal dependencies. Finally, a multilayer perceptron (MLP) is used to classify valence and arousal levels. Experiments on three public EEG datasets demonstrate that the proposed model significantly outperforms existing EEG-based dimensional emotion recognition methods. Cluster-based learning, reflecting electrode proximity and signal distribution, effectively captures structural patterns at the electrode-cluster level, while inter-cluster information integration further captures global signal interactions, thereby enhancing the interpretability and physiological validity of EEG-based dimensional emotion analysis. This approach provides a scalable framework for future affective computing and brain–computer interface (BCI) applications.
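
A hedged sketch of the cluster-then-integrate pattern the abstract describes: channels are grouped into spatial clusters, each cluster is encoded separately, and the per-cluster embeddings are stacked for downstream inter-cluster modeling. The cluster assignments, layer choices, and sizes are assumptions, not the published model:

```python
# Sketch only: per-cluster encoders feeding a stacked representation that a
# TCN or attention module could consume. Cluster layout is invented.
import torch
import torch.nn as nn

CLUSTERS = {"frontal": [0, 1, 2], "central": [3, 4], "occipital": [5, 6, 7]}

class ClusterEncoder(nn.Module):
    def __init__(self, n_ch, dim=32):
        super().__init__()
        self.conv = nn.Conv1d(n_ch, dim, kernel_size=5, padding=2)

    def forward(self, x):                # x: (batch, channels, time)
        return self.conv(x).mean(dim=2)  # pooled cluster embedding

encoders = {name: ClusterEncoder(len(ch)) for name, ch in CLUSTERS.items()}
eeg = torch.randn(4, 8, 128)             # (batch, 8 channels, 128 samples)
embeds = [enc(eeg[:, ch]) for (_, ch), enc in zip(CLUSTERS.items(), encoders.values())]
fused = torch.stack(embeds, dim=1)        # (batch, n_clusters, dim) -> inter-cluster model
print(fused.shape)                        # torch.Size([4, 3, 32])
```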

  • Research Article
  • 10.1109/tcyb.2025.3625166
A Novel Agent-Based Approach for Dynamic Emotion Modeling in Social Networks.
  • Nov 7, 2025
  • IEEE transactions on cybernetics
  • Xiaokun Wu + 4 more

In a socially tense environment with rising emotional pressure, understanding the spread patterns of group emotions, particularly negative emotions, is crucial for identifying social risks. Extensive research has explored emotion contagion, often using propagation models where node state transitions rely on preset probabilities. However, these methods introduce randomness, making them less reflective of real-world dynamics by failing to capture individual node behaviors and interactions in emotional networks. To address this, our study introduces a novel approach integrating text-based emotion recognition with propagation models, reconstructing emotion contagion at an individual level. This model enhances traditional nodes with multihop agents driven by text emotion analysis, where agents record and respond to neighbors' emotional states. As a result, emotion spread becomes a deterministic process, with individualized infection rates reflecting node variability. We categorized nodes based on emotional states, creating corresponding agent types to form the dynamic agent-based emotion model (AEmo). Tests on real-world and scale-free networks show this method effectively predicts group negative emotion spread and provides insight into individual emotion evolution, validating the model's effectiveness.
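
The key contrast the abstract draws, deterministic neighbor-driven updates instead of preset infection probabilities, can be illustrated with a toy contagion step on a small graph; the update rule and parameters below are assumptions:

```python
# Toy deterministic emotion-contagion step: each agent updates from its
# neighbors' recorded states rather than via a random infection coin-flip.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
emotion = {0: -1.0, 1: 0.2, 2: 0.5, 3: 0.8}  # valence: negative .. positive

def step(emotion, susceptibility=0.3):
    new = {}
    for node, nbrs in neighbors.items():
        influence = sum(emotion[n] for n in nbrs) / len(nbrs)
        # deterministic, individualized update instead of preset probability
        new[node] = (1 - susceptibility) * emotion[node] + susceptibility * influence
    return new

for t in range(3):
    emotion = step(emotion)
    print(t, {k: round(v, 2) for k, v in emotion.items()})
```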

  • Research Article
  • 10.3390/f16111693
The Psychophysiological Interrelationship Between Working Conditions and Stress of Harvester and Forwarder Drivers—A Study Protocol
  • Nov 6, 2025
  • Forests
  • Vera Foisner + 4 more

(1) Background: Austria’s use of fully mechanized harvesting systems has been continuously increasing. Technical developments, such as traction aid winches, have made it possible to drive on increasingly steep terrain. However, this has led to challenges and potential hazards for the operators, resulting in higher stand damage rates and risks of workplace accidents. Since these systems and working environments involve a highly complex interplay of various parameters, the purpose of this protocol is to propose a new set of methodologies that can be used to obtain a holistic interpretation of the psychophysiological interrelationship between the working conditions and stress of harvester and forwarder drivers. (2) Methods: We developed a research protocol to analyse the (a) environmental and (b) machine-related parameters; (c) psychological and psychophysiological responses of the operators; and (d) technical outcome parameters. Within this longitudinal exploratory field study, experienced drivers were monitored for over an hour at the beginning and the end of their workday while operating in varying steep terrains with and without a traction aid winch. The analysis is based on macroscopic (collected using cameras), microscopic (eye-tracking glasses and AI-driven emotion recognition), quantitative (standardized questionnaires), and qualitative (interviews) data. This multimodal research protocol aims to improve the health and safety of forest workers, increase their productivity, and reduce damage to remaining trees.

  • Research Article
  • 10.1145/3774880
Amd'SaEr: Arabic Multimodal Dataset for Sentiment Analysis and Emotion Recognition
  • Nov 5, 2025
  • ACM Transactions on Asian and Low-Resource Language Information Processing
  • Abdelhamid Haouhat + 3 more

Multimodal sentiment analysis and emotion recognition have attracted significant interest in multimodal learning. Naturally, humans express their feelings and emotions through nuanced expressions across various verbal and non-verbal modalities. Despite this, there remains a critical gap in publicly accessible multimodal datasets for the Arabic language. To address this issue, we posited that creating a large and high-quality Arabic multimodal dataset would significantly improve sentiment analysis and emotion recognition in Arabic contexts. We aimed to develop a large, high-quality Arabic Multimodal Sentiment Analysis and Emotion Recognition (Amd'SaEr) dataset by building upon our AMSA dataset, increasing its size to 1037 samples, and adding emotional labels. Leveraging a novel methodology, we carefully selected and annotated data across audio, text, and visual modalities, and proposed a hybrid inter-annotator agreement strategy. Extensive analyses were conducted to validate the robustness of the dataset. We experimented with the Amd'SaEr dataset using a customized MERBench framework, which demonstrated the dataset’s efficacy and reliability. Our findings indicate the high quality of the dataset and underscore the importance of multimodal context for accurate sentiment analysis and emotion recognition in Arabic. We recommend further research and application of the Amd'SaEr dataset in broader Arabic contexts, as it provides a valuable resource for advancing multimodal analysis in this language.

  • Research Article
  • 10.3390/bdcc9110280
Cross-Dataset Emotion Valence Prediction Approach from 4-Channel EEG: CNN Model and Multi-Modal Evaluation
  • Nov 5, 2025
  • Big Data and Cognitive Computing
  • Vladimir Romaniuk + 1 more

Emotion recognition based on electroencephalography (EEG) has gained significant attention due to its potential applications in human–computer interaction, affective computing, and mental health assessment. This study presents a convolutional neural network (CNN)-based model for emotion valence prediction from 4-channel headband EEG data, together with its evaluation against computer vision-based emotion valence recognition. We trained the model on the publicly available FACED and SEED datasets and tested it on a newly collected dataset recorded using a wearable BrainBit headband. The model’s performance is evaluated using both standard train–validation–test splitting and a leave-one-subject-out cross-validation strategy. Additionally, the model is evaluated against a computer vision-based emotion recognition system to assess the reliability and consistency of EEG-based emotion prediction. Experimental results demonstrate that the CNN model achieves competitive accuracy in predicting emotion valence from EEG signals, despite the challenges posed by limited channel availability and individual variability. The findings show the usability of compact EEG devices for real-time emotion recognition and their potential integration into adaptive user interfaces and mental health applications.
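
The leave-one-subject-out protocol mentioned in the abstract can be sketched with scikit-learn's LeaveOneGroupOut; the features, classifier, and subject counts below are synthetic stand-ins, not the study's pipeline:

```python
# Leave-one-subject-out (LOSO) evaluation: hold out all trials of one subject
# per fold so the test subject is never seen during training.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))            # stand-in for EEG features
y = rng.integers(0, 2, size=120)          # valence: low / high
subjects = np.repeat(np.arange(6), 20)    # 6 subjects, 20 trials each

scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    scores.append(clf.score(X[test], y[test]))
print(f"LOSO mean accuracy: {np.mean(scores):.2f}")
```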

  • Research Article
  • 10.53941/tai.2025.100018
RPGCN-GDA: Regionally Progressive Graph Convolutional Network with Gender-Sensitive Domain Adaptation for EEG Emotion Recognition
  • Nov 5, 2025
  • Transactions on Artificial Intelligence
  • Wei Zhong + 5 more

Numerous studies have demonstrated that gender-specific emotional patterns are prevalent and can be reflected in electroencephalography (EEG) signals. However, most existing EEG-based emotion recognition models fail to fully account for these gender differences, leading to limited generalization performance. To address this problem, this paper proposes a regionally progressive graph convolutional network with gender-sensitive domain adaptation (RPGCN-GDA). Grounded in prior information of gender differences, the proposed model is expected to flexibly capture gender-specific connectivity patterns across functional brain regions using a progressive graph structure. By fully fusing hierarchical emotional features and adaptively adjusting distributional differences between genders, our model achieves remarkable generalization in both cross-subject and cross-gender emotion recognition tasks. The experimental results on public datasets demonstrate that the model not only excels in subject-dependent and subject-independent tasks but also shows significant advantages in handling gender-specific emotional responses, offering a promising new direction for developing more gender-sensitive emotion recognition systems.
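
The abstract's gender-sensitive domain adaptation is adversarial in spirit; a common building block for such adaptation is a gradient reversal layer, sketched generically below (an illustration, not the RPGCN-GDA implementation):

```python
# Gradient reversal layer: identity in the forward pass, negated (scaled)
# gradient in the backward pass, so features learn to fool a domain classifier.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reversed gradient drives domain-invariant (here: gender-invariant) features
        return -ctx.lam * grad_output, None

feats = torch.randn(8, 32, requires_grad=True)       # stand-in EEG embeddings
domain_logits = torch.nn.Linear(32, 2)(GradReverse.apply(feats, 1.0))
domain_logits.sum().backward()
print(feats.grad.shape)  # torch.Size([8, 32])
```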

  • Research Article
  • 10.3390/fi17110509
Frame and Utterance Emotional Alignment for Speech Emotion Recognition
  • Nov 5, 2025
  • Future Internet
  • Seounghoon Byun + 1 more

Speech Emotion Recognition (SER) is important for applications such as Human–Computer Interaction (HCI) and emotion-aware services. Traditional SER models rely on utterance-level labels, aggregating frame-level representations through pooling operations. However, emotional states can vary across frames within an utterance, making it difficult for models to learn consistent and robust representations. To address this issue, we propose two auxiliary loss functions, Emotional Attention Loss (EAL) and Frame-to-Utterance Alignment Loss (FUAL). The proposed approach uses a Classification token (CLS) self-attention pooling mechanism, where the CLS summarizes the entire utterance sequence. EAL encourages frames of the same emotion to align closely with the CLS while separating frames of different classes, and FUAL enforces consistency between frame-level and utterance-level predictions to stabilize training. Model training proceeds in two stages: Stage 1 fine-tunes the wav2vec 2.0 backbone with Cross-Entropy (CE) loss to obtain stable frame embeddings, and Stage 2 jointly optimizes CE, EAL, and FUAL within the CLS-based pooling framework. Experiments on the IEMOCAP four-class dataset demonstrate that our method consistently outperforms baseline models, showing that the proposed losses effectively address representation inconsistencies and improve SER performance. This work advances Artificial Intelligence by improving the ability of models to understand human emotions through speech.
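
As a rough illustration of the frame-to-utterance consistency idea, one can penalize divergence between frame-level and utterance-level predictions; the KL-based loss below is a generic stand-in, not the paper's exact EAL/FUAL formulations:

```python
# Generic frame-to-utterance consistency loss: frame-level predictions are
# pulled toward the utterance-level (CLS-pooled) distribution via KL divergence.
import torch
import torch.nn.functional as F

def consistency_loss(frame_logits, utter_logits):
    # frame_logits: (batch, frames, classes); utter_logits: (batch, classes)
    frame_logp = F.log_softmax(frame_logits, dim=-1)
    utter_p = F.softmax(utter_logits, dim=-1).unsqueeze(1)
    return F.kl_div(frame_logp, utter_p.expand_as(frame_logp), reduction="batchmean")

frames = torch.randn(2, 50, 4)   # 4 emotion classes (IEMOCAP-style), toy shapes
utter = torch.randn(2, 4)
print(consistency_loss(frames, utter))
```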

  • Research Article
  • 10.1007/s11571-025-10364-5
Emotion recognition using spatially unidimensional self-attention with fusion feature of brain effective connectivity network and spectral power.
  • Nov 4, 2025
  • Cognitive neurodynamics
  • Tingwei Jiang + 3 more

Electroencephalogram (EEG)-based emotion recognition is crucial for advancing human-computer interaction (HCI), and brain network features have become a key research focus. While existing methods often concatenate brain network features with traditional single-channel features to enhance recognition performance, this direct concatenation undermines the spatial information of brain networks and hinders effective application of deep learning. In this work, we propose a novel feature fusion strategy that effectively combines two-dimensional brain effective connectivity (BEC) network features with one-dimensional spectral power features while preserving spatial information. To leverage the spatial topological properties of brain networks and the one-dimensional correlations in fused features, we further introduce a Dual-channel 1D-CNN based on Spatially Unidimensional Self-Attention (SAD-1D-CNN), designed to extract discriminative features by capturing spatial correlations within the combined data. Results show 90.61% accuracy on SEED and 82.13% on SEED-IV (2.68% higher than state-of-the-art). Comprehensive tests confirm the superiority of our fusion strategy and SAD-1D-CNN in emotion recognition. Parameter visualization reveals the attention module's ability to automatically focus on emotion-related core brain regions, and ablation experiments validate the necessity of each network module. These findings offer new perspectives for advancing emotion recognition research.
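
A minimal sketch of the dual-branch fusion idea, assuming invented shapes: one branch consumes the 2D connectivity matrix row-wise so each row keeps its channel identity, the other consumes the 1D spectral power vector, and the outputs are merged (not the SAD-1D-CNN itself):

```python
# Dual-branch fusion sketch: connectivity rows as Conv1d channels (spatial
# identity preserved) merged with a spectral-power branch. Sizes are invented.
import torch
import torch.nn as nn

n_ch = 62                                   # e.g., a SEED-style montage
conn_branch = nn.Conv1d(n_ch, 16, kernel_size=3, padding=1)   # rows as channels
power_branch = nn.Linear(n_ch * 5, 64)      # 5 frequency bands per channel

bec = torch.randn(4, n_ch, n_ch)            # (batch, channels, channels)
power = torch.randn(4, n_ch * 5)
fused = torch.cat([conn_branch(bec).mean(dim=2), power_branch(power)], dim=1)
print(fused.shape)                           # torch.Size([4, 80])
```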

  • Research Article
  • 10.1038/s41598-025-22525-x
Empowering people with intellectual disabilities using integrated deep learning architecture driven enhanced text-based emotion classification
  • Nov 4, 2025
  • Scientific Reports
  • Mohammed Abdullah Al-Hagery + 3 more

Emotion recognition is an important research field spanning psychology, healthcare, and human-computer interaction (HCI). However, conventional techniques rely mainly on textual analysis and facial expressions, both of which have flaws that limit reliability. Textual language is the most common carrier of human emotions, and its analysis relies on the available data. In natural language processing (NLP), textual emotion recognition (TER) has become a significant area of research due to its essential commercial and academic applications. With the growth of deep learning (DL) technologies, TER has attracted growing interest and undergone considerable advances in recent years. This paper proposes an Intelligent Emotion Recognition from Text Using a Hybrid Deep Learning Model and Word Embedding Process (IERT-HDLMWEP) model. The aim is to develop a DL-based system for accurate text emotion recognition to support communication for people with disabilities. Initially, the text pre-processing stage involves several typical steps to prepare the analysis and reduce the dimensionality of the input data. The IERT-HDLMWEP method creates a hybrid feature representation by integrating pre-trained Word2Vec vectors weighted by TF-DF category distribution and enriched with Part-of-Speech features to improve emotion detection in text. Finally, a hybrid of a convolutional neural network and a bidirectional gated recurrent unit with an attention mechanism (C-BiG-A) is employed for the classification process. A comprehensive simulation was implemented to verify the performance of the IERT-HDLMWEP method in emotion detection from the text dataset. The empirical outcomes indicated that the IERT-HDLMWEP methodology improves over other existing techniques.
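
To illustrate the weighted hybrid representation in the simplest terms: pre-trained word vectors scaled by per-term weights and averaged. The toy vectors and weights below stand in for Word2Vec and the paper's TF-DF weighting:

```python
# Toy weighted-embedding sketch: per-term weights scale word vectors before
# averaging into a document representation. Vectors and weights are invented.
import numpy as np

vectors = {"happy": np.array([0.9, 0.1]), "tears": np.array([0.2, 0.8])}
weights = {"happy": 1.5, "tears": 0.7}     # stand-in for TF-DF-style weights

def embed(tokens):
    vecs = [weights[t] * vectors[t] for t in tokens if t in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

print(embed(["happy", "tears"]))
```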

  • Research Article
  • 10.1007/s11571-025-10366-3
SFT-HN: a novel spatial-frequency-temporal hybrid network for EEG-based emotion recognition.
  • Nov 4, 2025
  • Cognitive neurodynamics
  • Lei Zhu + 4 more

Electroencephalograph (EEG) emotion recognition is a key task in the brain-computer interface (BCI) field. A growing number of studies have shown that deep learning methods for emotion recognition exhibit superior performance compared to traditional techniques. However, it remains challenging to fuse EEG spatial, frequency, and temporal information, and to make full use of discriminative local patterns among the features of different emotions. To address these issues, a novel hybrid model called the Spatial-Frequency-Temporal Hybrid Network (SFT-HN) is proposed. This model includes three Spatial Frequency Residual Modules (SFRM) and an attention-based Bidirectional Long Short-Term Memory (ATBI-LSTM). The former module extracts spatial-frequency features, while the latter learns temporal contexts. SFT-HN is trained to capture the complementarity among spatial-frequency-temporal information and adaptively explore discriminative local patterns. Specifically, 4D representations are created from raw EEG signals to preserve spatial, frequency, and temporal information. The SFRM module then adopts split-convert-merge techniques, residual connections, and attention mechanisms to enhance its spatial-frequency feature extraction ability for each input 4D representation tensor time slice. Moreover, an attention-enhanced mechanism is incorporated into a bidirectional LSTM module to capture the crucial temporal dependencies among the extracted features, thereby enhancing the discriminative power of the EEG features. The proposed method attains average accuracies of 97.61% and 97.57% for arousal-based and valence-based classification on the DEAP dataset, respectively. On the SEED dataset, the method achieves an average accuracy of 97.44%. Furthermore, we validate the robust generalization of our proposed model on a novel dataset, FACED, achieving an average accuracy of 96.24%. The model code is available at: https://github.com/AllGGI/SFT-HN-model.
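
The "4D representation" step can be pictured as arranging per-channel band power onto a 2D scalp grid for each band and time slice; the grid layout and sizes below are invented for illustration:

```python
# Sketch of a (time, bands, height, width) EEG representation: per-channel
# band power scattered onto a 2D scalp grid. Channel-to-grid mapping is invented.
import numpy as np

T, BANDS, H, W = 10, 5, 3, 3
grid_pos = {ch: divmod(ch, W) for ch in range(H * W)}   # channel -> (row, col)

band_power = np.random.rand(T, len(grid_pos), BANDS)    # (time, channels, bands)
rep = np.zeros((T, BANDS, H, W))
for ch, (r, c) in grid_pos.items():
    rep[:, :, r, c] = band_power[:, ch, :]
print(rep.shape)  # (10, 5, 3, 3)
```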

  • Research Article
  • 10.54531/colr9799
A62 Teaching Hot Debriefing to Paediatric Resident Doctors: Cultivating a Culture of Reflection and Psychological Safety
  • Nov 4, 2025
  • Journal of Healthcare Simulation
  • Sabah Hussain + 2 more

Introduction: In high-pressure clinical environments, fostering a culture that encourages reflection, learning, and emotional wellbeing is essential. Hot debriefing offers an immediate, structured opportunity for teams to reflect on critical events, strengthen communication, and embed psychological safety into regular practice [1]. This teaching session aimed to educate resident paediatric doctors on the importance of a hot debrief and introduce relevant models that support cultural transformation by normalising reflective practice. Methods: A multidisciplinary teaching session was delivered to 25 resident paediatric doctors, focusing on the practical application of hot debriefing. The session included a structured approach and a set of practical tools for initiating team-based hot debriefs. Through the use of videos and simulations, we were able to embed principles of psychological safety, emotional recognition, and inclusive dialogue. To facilitate real-time feedback, gather the thoughts of the resident doctors, and enable a collaborative environment, we utilised Slido within this session. Pre- and post-session surveys were used to assess changes in experience and confidence, and to identify future training needs. Qualitative comments were collected to capture perceived cultural and emotional impact. Results: Pre-course data showed that 80% of participants had little or no prior experience with hot debriefing. Following the session, 84% reported feeling moderately or much more confident in asking for a debrief. Additionally, 84% expressed interest in receiving further training on how to lead debriefs. Qualitative feedback consistently highlighted a shift in attitude toward team communication and support, with participants valuing the normalisation of discussing emotional responses. Many viewed the session as a catalyst for change, helping to challenge existing cultural norms around silence after difficult events and supporting learning from them. Discussion: The introduction of hot debriefing as both a concept and a structured practice contributed to a visible cultural shift within clinical teams. Rather than treating debriefs as optional or exceptional, the session repositioned them as integral to team-based care and resilience. By normalising immediate reflection, hot debriefing supports a compassionate, safety-oriented culture that prioritises emotional well-being alongside clinical outcomes. As healthcare organisations aim to address burnout, improve safety, and foster inclusive team dynamics, scalable interventions like hot debriefing can serve as foundational tools to drive cultural transformation from the ground up [2]. Going forward, we would like to deliver these sessions to all paediatric resident doctors and incorporate more simulation-based education to enhance a team culture that supports open communication, compassion, and continuous learning. Ethics Statement: As the submitting author, I can confirm that all relevant ethical standards of research and dissemination have been met. Additionally, I can confirm that the necessary ethical approval has been obtained, where applicable.

  • Research Article
  • 10.7717/peerj-cs.3301
Empowering cognitive disabilities in transit: an explainable, emotion-aware ITS framework
  • Nov 4, 2025
  • PeerJ Computer Science
  • Malik Almaliki + 5 more

People with disabilities need ongoing support and a balanced lifestyle. Smart cities like NEOM are emerging worldwide, and the Saudi government has implemented several disability accessibility programs in public spaces and transportation. This article addresses a critical yet often neglected challenge: accurately recognizing and interpreting facial emotions in individuals with cognitive disabilities to foster better social integration. Current emotion detection systems frequently overlook the unique needs of this demographic (slower response times, difficulty interpreting subtle cues, and varied attention spans) and provide limited transparency, undermining trust and hindering real-time applicability in complex, dynamic contexts. To overcome these limitations, we present a novel, comprehensive framework that utilizes the Internet of Things, fog computing, and advanced You Only Look Once (YOLO)v8-based deep learning models. Our approach incorporates adaptive feedback mechanisms to tailor interactions to each user’s cognitive profile, ensuring accessible, user-centric guidance in diverse real-world scenarios. In addition, we introduce EigenCam-based explainability techniques, which offer intuitive visualizations of the decision-making process, enhancing interpretability and trust for both users and caregivers. Seamless integration with assistive technologies, including augmented reality devices and mobile applications, further supports real-time, on-the-go interventions in therapeutic and educational contexts. Experimental results on benchmark datasets (RAF-DB, AffectNet, and CK+48) demonstrate the framework’s robust performance, achieving up to 95.8% accuracy and excelling under challenging conditions. The EigenCam outputs confirm that the model’s attention aligns with meaningful facial features, reinforcing the system’s interpretability and cultural adaptability. By delivering accurate, transparent, and context-aware emotion recognition tailored to cognitive disabilities, this research marks a promising step toward inclusive artificial intelligence (AI)-driven solutions, ultimately promoting independence, reducing stigma, and improving quality of life.
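
EigenCam, which the abstract names as its explainability technique, projects convolutional activations onto their first principal component to form a class-agnostic heatmap; the sketch below runs on random activations rather than the authors' YOLOv8 model:

```python
# EigenCam-style saliency: SVD over flattened feature maps, then the first
# principal component over spatial positions becomes the heatmap.
import numpy as np

acts = np.random.rand(256, 20, 20)             # (channels, H, W) feature maps
flat = acts.reshape(256, -1)
flat = flat - flat.mean(axis=0, keepdims=True)  # center before SVD
_, _, vt = np.linalg.svd(flat, full_matrices=False)
cam = np.abs(vt[0]).reshape(20, 20)             # first principal component
cam /= cam.max()                                # normalize to [0, 1]
print(cam.shape)  # (20, 20)
```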
