Related Topics

  • Acoustic Cues
  • Communication Cues
  • Behavioral Cues
  • Perceptual Cues
  • Linguistic Cues
  • Social Cues
  • Audio Cues

Articles published on Vocal cues

435 search results, sorted by recency
  • Research Article
  • 10.3390/s26041210
EEG-Based Emotion Estimation Model Integrating Structural and Time-Series Information Based on Deep Learning Architecture Optimization.
  • Feb 12, 2026
  • Sensors (Basel, Switzerland)
  • Kota Tsuji + 2 more

Emotion recognition is increasingly important for applications in mental health and personalized marketing. Traditional methods based on facial and vocal cues lack robustness due to voluntary control, motivating the use of EEG signals that capture neural dynamics with high temporal resolution. Existing EEG-based approaches using CNNs and LSTMs have improved spatial and temporal feature extraction; however, they still face critical limitations. These models struggle to represent electrode connectivity and adapt to inter-individual variability, and their architectures are typically handcrafted, requiring extensive manual tuning of hyperparameters and structural design. Such constraints hinder scalability and personalization, highlighting the need for automated architecture optimization. To address these challenges, we propose a dual-pipeline architecture that integrates frequency-domain and time-domain EEG features. The frequency-domain branch employs a Graph Convolutional Network (GCN) to model spatial relationships among electrodes, while the time-domain branch uses LSTM enhanced with Channel Attention to emphasize subject-specific informative channels. Furthermore, we introduce Differentiable Architecture Search (DARTS) to automatically discover optimal architectures tailored to individual EEG patterns, significantly reducing search cost compared to manual tuning. Experimental results demonstrate that our framework achieves competitive accuracy and high adaptability compared to state-of-the-art baselines, marking the first integration of GCN, LSTM, channel attention, and architecture search for EEG-based emotion recognition.
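
A minimal PyTorch sketch of the dual-pipeline idea described above, assuming 32 electrodes, five frequency bands, and a fixed normalized adjacency matrix; the layer sizes, class count, and module names are illustrative assumptions, not the authors' implementation, and the DARTS-based architecture search is omitted.

```python
# Hedged sketch: GCN branch over band-power features + LSTM branch with
# channel attention over the raw time series, fused for classification.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One propagation step: H' = relu(A_hat @ H @ W), A_hat: normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):               # x: (B, electrodes, in_dim)
        return torch.relu(a_hat @ self.lin(x))

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style weights over EEG channels."""
    def __init__(self, n_ch, r=4):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_ch, n_ch // r), nn.ReLU(),
                                 nn.Linear(n_ch // r, n_ch), nn.Sigmoid())

    def forward(self, x):                       # x: (B, time, channels)
        w = self.mlp(x.mean(dim=1))             # per-channel weight in [0, 1]
        return x * w.unsqueeze(1)

class DualBranchEEG(nn.Module):
    def __init__(self, n_ch=32, n_bands=5, hid=64, n_classes=4):
        super().__init__()
        self.gcn = GraphConv(n_bands, hid)
        self.att = ChannelAttention(n_ch)
        self.lstm = nn.LSTM(n_ch, hid, batch_first=True)
        self.head = nn.Linear(hid * 2, n_classes)

    def forward(self, band_power, raw, a_hat):
        g = self.gcn(band_power, a_hat).mean(dim=1)  # pool over electrodes
        h, _ = self.lstm(self.att(raw))              # raw: (B, time, channels)
        return self.head(torch.cat([g, h[:, -1]], dim=-1))

model = DualBranchEEG()
logits = model(torch.randn(8, 32, 5), torch.randn(8, 128, 32), torch.eye(32))
print(logits.shape)   # torch.Size([8, 4])
```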

  • Research Article
  • 10.1080/23279095.2026.2624602
Emotion recognition in patients with temporal and frontal lobe epilepsy and its relationship with perceived social functioning
  • Feb 4, 2026
  • Applied Neuropsychology: Adult
  • Staša Lalatović + 4 more

Objective: The study examined emotion recognition (ER) across visual and auditory modalities in patients with temporal lobe epilepsy (TLE) and frontal lobe epilepsy (FLE), and explored associations with perceived social functioning (SF). Method: Fifty patients (30 TLE, 20 FLE) and 50 healthy controls (HC) completed tasks assessing recognition of facial emotion and emotional prosody across seven emotions: neutral, happiness, surprise, anger, disgust, fear, and sadness. Patients also completed the Social Functioning subscale of the Quality of Life in Epilepsy Inventory-31 (QOLIE-31) and self-report questionnaires assessing affective symptoms. Results: Both TLE and FLE groups exhibited overall ER deficits across modalities compared to HC, with performance varying by emotion. TLE participants showed difficulties in recognizing fear and disgust across both modalities, whereas FLE participants were impaired in auditory recognition of these emotions and visual recognition of fear. Emotion differentiation impairments were relatively comparable across epilepsy types and modalities. Although groups did not differ in their relative performance across modalities, subsequent correlational analyses revealed a modest association between modalities in patients, but not controls. Within the patient group, the only significant association with perceived SF emerged for recognition of neutral prosodic features in the FLE group. Conclusion: Individuals with TLE and FLE experience difficulties recognizing emotions from both facial expressions and vocal cues, especially those with negative valence. Limited associations between ER and perceived SF were observed only in FLE patients. The findings underscore the importance of assessing sociocognitive functioning in people with epilepsy (PWE).

  • Research Article
  • 10.22214/ijraset.2026.76923
A Multimodal Virtual Psychiatrist Interviewer and Mental Health Screener
  • Jan 31, 2026
  • International Journal for Research in Applied Science and Engineering Technology
  • Suresh Yeresime

The global increase in mental-health conditions such as anxiety, depression, and stress highlights the need for accessible and timely psychological evaluation. Conventional evaluation remains limited due to clinician shortages and the stigma associated with seeking help. This work presents a multimodal Virtual Psychiatrist Interviewer designed to facilitate adaptive and scalable early-stage mental-health screening. The proposed framework integrates DistilBERT for linguistic interpretation, a convolutional audio-emotion model to analyze vocal cues, and V2Face-based facial-affect recognition for visual understanding. An attention-driven fusion mechanism combines text, acoustic, and facial embeddings to capture complementary behavioral signals and produce robust preliminary assessments. The system is trained and evaluated on a curated mental health text dataset, the RAVDESS emotional speech corpus, and publicly available facial expression datasets. Experimental results demonstrate competitive performance on anxiety, depression, and stress detection tasks, while ablation studies confirm the contribution of each modality. The findings indicate the potential of the proposed system for real-time AI-assisted mental-health support.
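
A hedged sketch of what an attention-driven fusion layer over the three modality embeddings might look like; the embedding dimensions (768 for DistilBERT-style text, 256 for audio, 512 for face), the scoring network, and the three-way output head are assumptions for illustration, not the paper's architecture.

```python
# Sketch: project each modality into a shared space, score its relevance,
# and classify from the attention-weighted sum of the projections.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dims=(768, 256, 512), d=256, n_classes=3):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(di, d) for di in dims])
        self.score = nn.Linear(d, 1)          # scalar relevance per modality
        self.head = nn.Linear(d, n_classes)   # e.g., anxiety/depression/stress

    def forward(self, text, audio, face):
        z = torch.stack([p(m) for p, m in zip(self.proj, (text, audio, face))],
                        dim=1)                                # (B, 3, d)
        w = torch.softmax(self.score(torch.tanh(z)), dim=1)   # (B, 3, 1)
        return self.head((w * z).sum(dim=1))  # attention-weighted fusion

fusion = AttentionFusion()
out = fusion(torch.randn(4, 768), torch.randn(4, 256), torch.randn(4, 512))
print(out.shape)   # torch.Size([4, 3])
```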

  • Research Article
  • 10.1007/s10803-025-07212-0
Differential Associations of Pitch Discrimination and Rapid Auditory Processing With Emotional Prosody Recognition in Autistic and Non-autistic Children.
  • Jan 17, 2026
  • Journal of autism and developmental disorders
  • Ming Lui + 4 more

Vocal cues imbue speech with crucial emotional expression. Recognizing subtle changes in intonation, pitch and prosody provides rich social information and cues for responding in everyday interactions - cues that may be missed by individuals with differences in sensory processing and social development, such as those with autism. Although atypical auditory processing in autism is well-established in the literature, the contribution of these sensory differences to emotional prosody recognition requires further investigation. This study examined whether the associations of auditory abilities and social cognition with emotional prosody recognition differ between autistic and non-autistic children. Twenty-eight autistic children and twenty-eight non-autistic children completed tasks assessing rapid auditory processing (RAP), pitch discrimination, social cognition (SC), and emotional prosody recognition (EPR) of spoken words and sentences. Autistic children demonstrated better RAP but lower SC performances compared to non-autistic children. No group differences were found in pitch discrimination or emotional prosody recognition. Across both groups, better RAP was associated with better emotional prosody recognition. In contrast, pitch discrimination was positively associated with emotional prosody recognition of low-intensity emotional words only in autistic children. The findings highlight the important association between RAP and emotional prosody recognition in both autistic and non-autistic children, while indicating a distinct association between pitch discrimination and emotional prosody recognition in autistic children. The results suggest the need for further research into the role of auditory processing in emotional speech perception in autism, and the potential benefits of interventions targeting pitch discrimination and RAP.

  • Research Article
  • 10.1037/emo0001626
The developmental changes in emotion recognition from human biological motion by children aged from 4 to 12 years.
  • Jan 15, 2026
  • Emotion (Washington, D.C.)
  • Elliot Riviere + 2 more

Most research on the development of emotion recognition has focused on facial expressions, leaving a relative gap in our understanding of how children interpret emotions through body movements. This study examined developmental changes in the ability to recognize basic emotions (joy, anger, fear, and sadness) from human biological motion presented in point-light displays (HBM-PLDs), with particular attention to how these changes vary depending on the type of emotion and age. One hundred twenty-eight preschool and primary school children aged 4-12 years participated in two experimental tasks involving the explicit recognition of emotions from HBM-PLDs. The results highlight a clear developmental progression in the recognition of emotions from HBM-PLDs with increasing age. This developmental change appears to follow a curvilinear trajectory, with an inflection point around 8.5 years of age (100 months). However, the study further reveals that this inflection point differs depending on the specific discrete emotion considered. Joy seems to be recognized as early as age 4, followed by anger between ages 5 and 6, sadness between ages 6 and 7.5, and finally fear after age 9-10. This represents an important contribution, demonstrating that the improvement in emotion recognition from body movement is not homogeneous but modulated according to the discrete emotion. These findings support the idea that the development of discrete emotion recognition is independent of the modality of presentation (facial expressions, body movements, vocal cues, etc.) and suggest that emotion recognition may rely on a modality-independent and unified developmental process.

  • Research Article
  • 10.1002/pchj.70072
From Physique to Feelings: Deciphering the Body–Jealousy Connection in Women's Responses to Feminine Vocal Cues
  • Jan 4, 2026
  • PsyCh Journal
  • Cairang Guanque + 5 more

Jealousy typically emerges when individuals sense that their romantic relationships may be threatened by others who display characteristics indicative of high mate quality. Previous research has found that in contexts of intrasexual competition, feminine female voices indicate high mate value and elicit stronger jealousy responses from other women. However, studies on individual differences in jealousy sensitivity are limited. Body size is an important factor that influences women's mating behavior. In the current study, we investigated the effect of women's height, weight, and body mass index (BMI) on their jealousy sensitivity to other women's vocal femininity. Results showed that women perceived more feminine voices as more jealousy‐inducing, and this effect was modulated by body size. Taller women demonstrated heightened sensitivity to vocal changes in pitch and formants, while slimmer women and those with a lower BMI showed increased sensitivity to pitch variations in competitive scenarios. These findings indicate that body size significantly shapes individual differences in jealousy sensitivity during intrasexual competition. Our study supports the mate quality–jealousy hypothesis, highlighting how traits perceived as indicators of higher mate quality amplify jealousy responses. The current research extends the literature on vocal cues and attractiveness by demonstrating how these factors influence emotional reactions such as jealousy.
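
For readers unfamiliar with how vocal-femininity stimuli of this kind are typically built, here is an illustrative sketch using the praat-parselmouth package's standard "Change gender" manipulation to raise pitch and formants together; the file name and shift values are assumptions, and this is not the study's stimulus pipeline.

```python
# Sketch: feminize a voice recording by shifting formants up 10% and
# raising the pitch median, via Praat's "Change gender" command.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("female_voice.wav")   # hypothetical input file

feminized = call(snd, "Change gender",
                 75, 600,    # pitch analysis floor/ceiling (Hz)
                 1.10,       # formant shift ratio (+10% = more feminine)
                 240,        # new pitch median (Hz)
                 1.0,        # pitch range factor (unchanged)
                 1.0)        # duration factor (unchanged)
feminized.save("female_voice_feminized.wav", "WAV")
```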

  • Research Article
  • 10.36222/ejt.1761640
Performance Analysis of YAMNet and VGGish Networks for Emotion Recognition from Audio Signals
  • Dec 31, 2025
  • European Journal of Technic
  • Yunus Korkmaz

Understanding human emotions through vocal cues is key to developing emotionally intelligent systems, particularly in fields such as human-computer interaction, healthcare, and virtual assistants. However, accurately recognizing emotions from speech remains a challenging task due to the variability in speaker traits, acoustic conditions, and the subtle, often overlapping nature of emotional states. In this study, a comparative analysis of transfer learning methods for speech emotion recognition (SER) is presented, employing pretrained audio-based neural networks. Specifically, YAMNet and VGGish models were used both as static feature extractors and in a fine-tuning setup. The extracted embeddings were classified using traditional machine learning algorithms, including Support Vector Machines (SVM), K-Nearest Neighbors (KNN), Random Forests (RF), and Logistic Regression (LR). Experiments were conducted on two widely used emotional speech datasets: RAVDESS and EmoDB. The results demonstrate that VGGish consistently outperforms YAMNet in both feature extraction and fine-tuning scenarios. The highest classification accuracy was achieved using VGGish features with LR on EmoDB (73.83%). Additionally, fine-tuning VGGish on EmoDB yielded a competitive accuracy of 72.90%. Class-specific analysis showed that the highest AUC score of 0.9635 was obtained using LR in the VGGish + EmoDB setting, while fine-tuning both YAMNet and VGGish on the EmoDB dataset reached a recall of 1 for the ‘Sadness’ emotion.
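
As a concrete illustration of the static feature-extraction route (not the paper's code), the sketch below pulls clip-level YAMNet embeddings from TensorFlow Hub and scores a logistic-regression classifier with cross-validation; wav_paths and labels are hypothetical placeholders for 16 kHz mono clips and their emotion labels.

```python
# Sketch: pretrained YAMNet as a frozen feature extractor + classic classifier.
import numpy as np
import soundfile as sf
import tensorflow_hub as hub
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

yamnet = hub.load("https://tfhub.dev/google/yamnet/1")

def clip_embedding(path):
    # YAMNet expects mono float32 audio at 16 kHz (resample beforehand if needed)
    wav, sr = sf.read(path, dtype="float32")
    _scores, embeddings, _spectrogram = yamnet(wav)
    return embeddings.numpy().mean(axis=0)          # (1024,) clip-level vector

# wav_paths / labels are hypothetical: lists of clip paths and emotion ids
X = np.stack([clip_embedding(p) for p in wav_paths])
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, labels, cv=5).mean())  # mean accuracy over folds
```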

  • Research Article
  • 10.33650/ijoeel.v7i2.12655
Students' perceptions on the Use of Paralanguage in EFL Classrooms
  • Dec 15, 2025
  • International Journal of English Education and Linguistics (IJoEEL)
  • Alfian Sandy William Kok + 3 more

This study examined students’ perceptions of English language teachers’ use of paralanguage in schools on the border between Indonesia and Timor Leste, specifically examining its impact on students’ engagement and comprehension in an English as a Foreign Language (EFL) environment. Paralanguage—encompassing vocal qualities such as pitch, volume, and rate of speech—plays a critical yet often overlooked role in effective communication, especially in a language learning environment where non-verbal cues can significantly impact students’ comprehension. Through qualitative methods, including observation and semi-structured interviews with students at SMA Fides Quarens Intellectum and SMA Negeri 2 Kefamenanu, this study investigated how students interpreted their teachers’ paralinguistic cues and how these cues influenced their learning experiences. The findings revealed that students perceived paralanguage as a valuable tool for enhancing clarity and emotional connection in the classroom, aiding their comprehension and encouraging active participation. Elements such as pitch modulation and vocal emphasis were shown to help students understand important information and stay engaged, while balanced vocal qualities created a supportive and engaging classroom atmosphere. Furthermore, this study highlights the role of paralanguage in reducing student anxiety and increasing self-confidence, which is particularly relevant in cross-cultural educational settings where language barriers and cultural differences can hinder effective communication. These insights underscore the importance of incorporating paralanguage awareness into teacher training programs, suggesting that the intentional use of vocal cues can enhance instructional effectiveness and support student language acquisition.

  • Research Article
  • 10.33423/jmpp.v26i3.7988
“Trust Me”: The Surprising Role of Vocal Characteristics in Trust Formation
  • Dec 6, 2025
  • Journal of Management Policy and Practice
  • Mark H Phillips + 1 more

Trust plays a pivotal role in organizational life, yet its formation under conditions of limited interaction remains poorly understood. This study investigates how vocal characteristics influence initial trust formation when individuals must make rapid judgments based on pitch, vocal variability, and perceived integrity. In a laboratory simulation, participants evaluated recruiters based solely on voice messages, then made trust decisions with financial implications. Vocal pitch significantly influenced perceived ability, and interpersonal liking predicted all three trustworthiness dimensions, underscoring the importance of vocal cues in early organizational encounters. Implications for virtual teams, hiring, and leadership communication are discussed.

  • Research Article
  • 10.3390/jintelligence13120159
Bridging Text and Speech for Emotion Understanding: An Explainable Multimodal Transformer Fusion Framework with Unified Audio–Text Attribution
  • Dec 3, 2025
  • Journal of Intelligence
  • Ashutosh Pandey + 2 more

Conversational interactions, rich in both linguistic and vocal cues, provide a natural context for studying emotion understanding. In this work, we propose an explainable multimodal transformer framework that integrates textual semantics (via RoBERTa) and acoustic prosody (via WavLM) to advance emotion understanding. By projecting both modalities into a shared latent space, our model captures the complementary contributions of language and speech to affective communication, achieving an accuracy of 0.83 across five emotion categories. Crucially, we embed explainable AI (XAI) techniques including Integrated Gradients and Occlusion to attribute predictions to specific linguistic tokens and prosodic patterns, thereby aligning computational mechanisms with human cognitive processes of emotion perception. Beyond performance gains, this work demonstrates how multimodal AI systems can support transparent, human-centered emotion recognition.
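
A minimal sketch, under assumed base-size checkpoints and mean pooling, of projecting RoBERTa text features and WavLM speech features into a shared latent space for five-way emotion classification; the projection sizes and the forward helper are illustrative assumptions, not the authors' released model.

```python
# Sketch: encode text and speech, project both into a common 256-d space,
# concatenate, and classify over five emotion categories.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, RobertaModel, WavLMModel

tok = AutoTokenizer.from_pretrained("roberta-base")
text_enc = RobertaModel.from_pretrained("roberta-base")
speech_enc = WavLMModel.from_pretrained("microsoft/wavlm-base")

proj_t = nn.Linear(768, 256)      # text projection into the shared space
proj_a = nn.Linear(768, 256)      # audio projection into the shared space
head = nn.Linear(512, 5)          # five emotion categories, as in the paper

def forward(utterance_text, waveform_16k):        # waveform: (1, samples)
    t = text_enc(**tok(utterance_text, return_tensors="pt")).last_hidden_state
    a = speech_enc(waveform_16k).last_hidden_state
    z = torch.cat([proj_t(t.mean(1)), proj_a(a.mean(1))], dim=-1)
    return head(z)                                # logits over emotions

logits = forward("I can't believe this happened!", torch.randn(1, 16000))
print(logits.shape)   # torch.Size([1, 5])
```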

  • Research Article
  • 10.1038/s41598-025-29889-0
Deep spectrotemporal network based depression severity estimation from speech.
  • Nov 28, 2025
  • Scientific reports
  • Ishana Jabbar + 3 more

Depression is a severe mental health disorder that profoundly affects individuals, characterized by persistent sadness, reduced enthusiasm, and impaired concentration, ultimately impacting daily life. Early and precise diagnosis is essential yet challenging, as traditional approaches rely heavily on subjective evaluations by mental health professionals, often resulting in delayed intervention. Recent advancements have explored the use of machine learning techniques to automatically estimate depression severity through speech analysis. Although prior methods have demonstrated effectiveness, there remains potential for further performance improvement. This paper introduces a novel deep spectrotemporal network designed to estimate depression severity scores from vocal cues. Specifically, we propose extracting holistic and localized spectral features using the pre-trained EfficientNet-B3 model from Mel spectrogram sequences and capturing spatiotemporal dynamics through our novel Volume Local Neighborhood Encoded Pattern (VLNEP) descriptor. Finally, a dual-stream transformer model is designed to effectively fuse and learn these extracted spectral and spatiotemporal features. Experimental results on the benchmark AVEC2013 and AVEC2014 datasets demonstrate the superiority of our proposed framework compared to state-of-the-art methods.
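
To make the spectral front end concrete, here is a hedged sketch (assumed window, hop, and Mel settings, not the paper's) that turns speech into a log-Mel spectrogram and passes it through a pretrained EfficientNet-B3 with its classifier removed, yielding a holistic 1536-dimensional feature vector per segment.

```python
# Sketch: log-Mel spectrogram -> pretrained EfficientNet-B3 feature extractor.
import librosa
import torch
from torchvision.models import efficientnet_b3, EfficientNet_B3_Weights

y, sr = librosa.load("speech.wav", sr=16000)     # hypothetical input file
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128,
                                     n_fft=1024, hop_length=256)
log_mel = librosa.power_to_db(mel)               # (128, frames)

net = efficientnet_b3(weights=EfficientNet_B3_Weights.DEFAULT)
net.classifier = torch.nn.Identity()             # keep pooled 1536-d features
net.eval()

x = torch.from_numpy(log_mel).float()[None, None]  # (1, 1, 128, frames)
x = x.repeat(1, 3, 1, 1)   # tile to 3 channels (input normalization omitted)
with torch.no_grad():
    feats = net(x)                               # (1, 1536) spectral features
```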

  • Research Article
  • 10.3389/fpsyg.2025.1668759
Uncovering interactive effects of affective voice tone and personality diversity on dyadic creativity
  • Nov 24, 2025
  • Frontiers in Psychology
  • Hiroyuki Sakai + 3 more

Creativity is a key driver of innovation and social progress. Research on creativity has identified a variety of factors that affect creativity at both individual and group levels. However, the interactive effects of these factors in creativity have not been fully investigated. Thus, the present study aimed to explore the interactive effects of affective voice tone and personality traits on creativity in acquainted dyads. Pairs of participants took part in an experiment in which they cooperated on a verbal creativity task via a video conferencing system that modulated affective voice tone and completed personality questionnaires. The results demonstrate that affective voice tone modulation interacts with personality diversity to shape dyadic creativity. Specifically, while voice tone alone did not alter creative performance, it significantly modulated the positive effect of personality heterogeneity, suggesting that emotional vocal cues can constrain the benefits of interpersonal diversity during collaboration. This is the first empirical evidence for an interactive effect of affective voice tone and personality heterogeneity on dyadic creativity in close relationships. In addition, this study offers valuable insights into designing mechanisms and systems that enhance co-creation, not only in human teams but also in collaborations between humans and artificial intelligence agents.

  • Research Article
  • 10.5617/nmi.12537
Multimodal Integration Challenges in Emotionally Expressive Child Avatars for Training Applications
  • Nov 19, 2025
  • Nordic Machine Intelligence
  • Pegah Salehi + 4 more

Dynamic facial emotion is essential for believable AI-generated avatars; however, most systems remain visually inert, limiting their utility in simulations such as virtual training for investigative interviews with abused children. We introduce and evaluate a real-time architecture fusing Unreal Engine 5 MetaHuman rendering with NVIDIA Omniverse Audio2Face to translate vocal prosody into high-fidelity facial expressions on photorealistic child avatars. Due to limitations in synthetic voice options, both avatars were voiced using a young adult female TTS model, selected from two different systems in an attempt to better match each character. This compromise introduces a confounding factor. Voice-age mismatches and prosodic or emotional differences may disrupt audiovisual alignment. We implemented a two-PC setup decoupling language and speech synthesis from GPU-intensive rendering, designed to support low-latency interaction in desktop and VR environments. A between-subjects study (N = 70) using audio+visual and visual-only conditions assessed perceptual impacts as participants rated emotional clarity, facial realism, and empathy for two avatars expressing joy, sadness, and anger. Our results show that while avatars could express emotions recognizably, particularly sadness and joy, recognition of anger dropped markedly without audio, highlighting the role of vocal cues in conveying high-arousal states. Interestingly, silencing the clips improved perceived realism by removing mismatches between facial animation and voice, especially where age or emotional tone was incongruent. These findings highlight that perceived believability hinges on the interplay between audiovisual congruence and facial geometry: a mismatched voice can undermine even well-crafted expressions, while a congruent one can enhance weaker visuals. This trade-off presents an ongoing challenge for designing emotionally coherent avatars in sensitive training contexts.

  • Research Article
  • 10.1016/j.socscimed.2025.118558
The sound of emergency: The role of vocal cues in healthcare.
  • Nov 1, 2025
  • Social science & medicine (1982)
  • Arianna Bagnis + 6 more

  • Research Article
  • 10.1016/j.isci.2025.113858
Vocal smile is recognized but not embodied in autistic adults
  • Oct 25, 2025
  • iScience
  • Annabelle Merchie + 7 more

  • Research Article
  • 10.1044/2025_jslhr-24-00849
On How Vocal Cues Impact Dynamic Credibility Judgments: Mouse-Tracking Paradigm Examining Speaker Confidence and Gender Through Voice Morphing.
  • Oct 9, 2025
  • Journal of speech, language, and hearing research : JSLHR
  • Zhikang Peng + 2 more

This study aimed to explore how vocal cues of confidence and gender influence the dynamic mechanisms involved in reasoning about speaker credibility. Using a mouse-tracking paradigm, 52 participants evaluated speaker credibility based on semantically neutral statements that varied in morphed levels of gender (Experiment 1) and confidence (Experiment 2). Participants' mouse trajectories and reaction times were recorded to assess their credibility judgments. The findings revealed that perceived confidence significantly impacted credibility judgments and mouse trajectories, while gender did not. Higher levels of perceived confidence resulted in more credible assessments, demonstrated by direct mouse trajectories and quicker reaction times. Moreover, mouse trajectories reflected cognitive mediation effects between confidence and credibility judgments, indicating that vocal cues influence both the final judgments and the dynamic inference process during speaker credibility assessment. The study highlights the critical role of vocal cues, particularly confidence, in shaping perceptions of speaker credibility. It suggests that these vocal cues not only affect final credibility judgments but also play a significant role in the dynamic reasoning process involved in social inference. https://doi.org/10.23641/asha.30265942.
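
For readers unfamiliar with mouse-tracking measures, the sketch below computes two standard trajectory indices, maximum deviation and the area between the observed path and the direct start-to-end line; it is a generic illustration with toy data, not the authors' analysis script.

```python
# Sketch: maximum deviation (MD) and area (AUC) of a mouse trajectory
# relative to the straight line from start to end point.
import numpy as np

def trajectory_indices(x, y):
    """MD and signed-area AUC of a trajectory vs. its direct path."""
    p0 = np.array([x[0], y[0]])
    line = np.array([x[-1] - x[0], y[-1] - y[0]])
    pts = np.stack([x, y], axis=1) - p0
    # signed perpendicular distance of each sample from the start-end line
    dev = (line[0] * pts[:, 1] - line[1] * pts[:, 0]) / np.linalg.norm(line)
    md = dev[np.argmax(np.abs(dev))]     # largest (signed) deviation
    auc = np.trapz(dev)                  # area proxy (unit sample spacing)
    return md, auc

t = np.linspace(0.0, 1.0, 101)
x, y = t, t + 0.3 * np.sin(np.pi * t)    # a curved toy trajectory
print(trajectory_indices(x, y))          # more curvature -> larger MD / AUC
```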

  • Research Article
  • 10.1016/j.jvoice.2025.10.002
Contextual Effects of Vocal Pitch on Trust Perception in Aviation Settings.
  • Oct 1, 2025
  • Journal of voice : official journal of the Voice Foundation
  • Erdoğdu Akça + 5 more

  • Research Article
  • 10.31117/neuroscirn.v8i3.449
The role of intonation in designing machinery for mental sports psychology
  • Sep 23, 2025
  • Neuroscience Research Notes
  • Hui Ying Jong + 1 more

Intonation – the variation in pitch, rhythm, and stress in speech – plays a crucial role in cognitive and emotional regulation, particularly in the field of sports psychology. This mini-review examines the role of intonation in designing machinery for mental sports psychology, focusing on three core areas: neurocognitive mechanisms, technological integration, and psychophysiological responses. We explore how the brain processes intonation, how it influences motivation and attention in athletes, and how emerging technologies are incorporating vocal cues for performance optimisation. Neurocognitive research reveals that intonation engages bilateral cortical and subcortical pathways, influencing attention, memory encoding, and motivation regulation. The amygdala and auditory cortex process emotional prosody, while Self Determination Theory (SDT) and Neurovisceral Integration models highlight the motivational and stress-modulating effects of tone of voice. Technological advancements leverage AI-driven coaching, neurofeedback systems, and VR-based training to integrate adaptive vocal cues that regulate athletes' arousal levels. Biofeedback tools and voice analysis systems now track stress and cognitive load via vocal markers, enabling personalised mental training. On a psychophysiological level, intonation directly affects heart rate, respiratory function, and hormonal responses, influencing athletes’ readiness, stress resilience, and performance outcomes. Studies show that energising intonations enhance physical output, while calming tones reduce anxiety and improve decision-making under pressure. Structured vocal guidance in imagery training, relaxation techniques, and pre-performance routines optimises arousal modulation for peak performance. Despite growing interest, the literature lacks an integrative framework that explicitly connects intonation-driven vocal modulation with neurocognitive and psychophysiological mechanisms in sport-specific contexts. We propose a conceptual model linking intonation to cognitive and physiological optimisation, emphasising coach-athlete communication, voice-based feedback, and real-time stress tracking. Future research should explore individualised voice training, multimodal integration with movement, and neuroadaptive intonation technologies to refine mental performance strategies in sports.
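
As a small illustration of the vocal markers such voice-analysis systems build on, the sketch below extracts a pitch contour with librosa's pYIN tracker and summarizes its median and variability; the file name and pitch bounds are assumptions.

```python
# Sketch: pitch contour extraction and simple intonation summary statistics.
import librosa
import numpy as np

y, sr = librosa.load("coach_instruction.wav", sr=22050)  # hypothetical clip
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"), sr=sr)
f0 = f0[voiced]                          # keep voiced frames only
print(f"median pitch: {np.median(f0):.1f} Hz, "
      f"variability (SD): {np.std(f0):.1f} Hz")
```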

  • Research Article
  • 10.1017/s0305000925100214
The acquisition of plain-emphatic consonant contrasts by Arabic-speaking children: An acoustic study.
  • Sep 1, 2025
  • Journal of child language
  • Anwar Alkhudidi + 4 more

Arabic emphatic consonants are claimed to be late-acquired, likely due to their motoric complexity, involving both coronal and pharyngeal/uvular constrictions. Children's production has largely been studied using impressionistic data, with limited acoustic analysis. This study acoustically examines the acquisition of emphatic consonants in Saudi-Hijazi Arabic-speaking children aged 3-6 years. Thirty-eight children performed a real-word repetition task, after which consonantal and vocalic cues to the plain-emphatic contrast were measured. Results show that children produce both types of acoustic cues, with an age-related increase in the acoustic contrast and an overall alignment with adult patterns. Larger acoustic contrasts were found in vowels preceding rather than following consonants in word-medial positions, with no evidence for a difference between word-initial and word-final positions. The plain-emphatic contrast was greater for stops than fricatives and larger for female than male children. These findings are discussed in relation to the development of coarticulated consonants.
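
A hedged sketch of the kind of vocalic-cue measurement involved (a lowered F2 in the adjacent vowel is the classic acoustic correlate of emphasis), using the praat-parselmouth package; the file name and vowel interval times are assumptions, not the study's measurement script.

```python
# Sketch: measure F2 at a vowel midpoint as a vocalic cue to emphasis.
import parselmouth

snd = parselmouth.Sound("child_repetition.wav")   # hypothetical recording
formants = snd.to_formant_burg(time_step=0.01, max_number_of_formants=5)

vowel_start, vowel_end = 0.42, 0.58   # e.g., from a hand-labeled TextGrid
midpoint = (vowel_start + vowel_end) / 2
f2 = formants.get_value_at_time(2, midpoint)
print(f"F2 at vowel midpoint: {f2:.0f} Hz")   # lower F2 -> more emphatic-like
```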

  • Research Article
  • 10.35629/5252-0709544552
The Role of Non-Verbal Communication in Enhancing Team Performance in Multicultural Organizations
  • Sep 1, 2025
  • International Journal of Advances in Engineering and Management
  • Prince Godswill Akhimien

This study examines the role of non-verbal communication, with emphasis on paralanguage, in enhancing team performance within multicultural organizations. Using a survey research design, data were collected from 245 respondents across diverse organizations, including staff from Ambrose Alli University, Ekpoma, Edo State. Descriptive and inferential analyses were conducted to test the hypothesis that paralanguage significantly enhances adaptability in multicultural settings. Descriptive statistics revealed high mean scores for paralanguage (M = 3.84, SD = 0.76) and adaptability (M = 3.67, SD = 0.82), suggesting that respondents strongly acknowledged their relevance to workplace communication. Likert scale responses further indicated consensus on the role of paralanguage in reducing misunderstandings, clarifying meaning, and facilitating team collaboration. The correlation analysis established a statistically significant and positive relationship between paralanguage and adaptability (r = .62, p < .01), implying that increased use of vocal cues such as tone, pauses, and emphasis strengthens adaptability in dynamic work environments. These findings align with prior empirical studies in Nigerian and global organizational contexts, reinforcing that paralanguage is a critical predictor of adaptability and team cohesion. The study concludes that non-verbal communication is not merely supplementary but integral to enhancing collaboration, trust, and performance in multicultural organizations. It recommends targeted training on paralanguage use as a strategy to foster adaptability and inclusiveness, thereby improving organizational effectiveness.
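
A minimal sketch of the key inferential step, a Pearson correlation between the two scale scores, using simulated stand-ins calibrated to the reported mean, SD, and sample size; the arrays below are not the study's data.

```python
# Sketch: Pearson correlation between paralanguage and adaptability scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
paralanguage = rng.normal(3.84, 0.76, 245)        # reported M, SD, N
adaptability = 0.62 * paralanguage + rng.normal(0, 0.66, 245)  # toy linkage

r, p = pearsonr(paralanguage, adaptability)
print(f"r = {r:.2f}, p = {p:.3g}")   # study reported r = .62, p < .01
```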
