EMOTIVE SPEECH ACTS IN CROSS-CULTURAL COMMUNICATION: A COMPREHENSIVE ANALYSIS AND EXPERIMENTAL STUDY
The aim of this study is to determine the role of emotive speech acts in cross-cultural language learning environments and to reveal the interplay between universal emotional markers and culturally specific expression patterns. The research applied several data analysis methods: acoustic analysis, facial expression analysis using the Facial Action Coding System (FACS), lexical analysis, and correlational and regression analysis. Comprehensive analysis of acoustic features, facial expressions, and lexical patterns demonstrates that emotional expression follows a dual pattern: some elements remain consistent across languages, while others undergo significant cultural adaptation. Results indicate that language learners develop an "emotional interlanguage" that synthesizes native expression strategies with target language norms. Spanish learners exhibited greater facial expressiveness when expressing happiness, suggesting adoption of the target culture's more overt emotional display rules, whereas anger was more explicitly verbalized across all language learning groups, indicating that different emotions rely on distinct channels of expression. Principal component analysis and hierarchical clustering revealed discrete emotional expression profiles across language groups, while multiple regression models identified predictive relationships between linguistic proficiency, cultural exposure, and emotional adaptation.

The findings support a nuanced theoretical model that integrates universalist and relativist perspectives on emotional expression, suggesting that language learners navigate a dynamic space between these poles. Certain aspects of emotional expression, such as increased vocal intensity for anger and decreased speech rate for sadness, remain relatively consistent across language groups, supporting the universality hypothesis. Other aspects, particularly facial expressiveness for happiness and lexical choices for emotional states, show significant adaptation to target language norms, supporting the cultural relativity perspective. The data indicate that the learners' "emotional interlanguage" is a dynamic system that incorporates elements from both the native emotional repertoire and the target language's cultural norms; it evolves with increasing language proficiency and cultural exposure, but the adaptation process varies across channels of expression and across emotions. The finding that cultural familiarity mediates the relationship between language proficiency and emotional expressiveness suggests that emotional adaptation in language learning is not simply a function of linguistic knowledge but requires deeper cultural learning and engagement.
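For readers who want to see how such an analysis pipeline fits together, the following sketch illustrates PCA, hierarchical clustering, and multiple regression over per-learner expression features. It is a minimal illustration only: the file name, feature columns, and model terms (e.g. adaptation_score, proficiency, cultural_exposure) are assumed placeholders, not the study's actual variables.

```python
# Hypothetical sketch of the statistical pipeline described above: PCA and
# hierarchical clustering over per-learner emotion-expression features, then a
# regression testing whether proficiency and cultural exposure predict adaptation.
# The CSV file and all column names are illustrative assumptions.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster
import statsmodels.formula.api as smf

df = pd.read_csv("emotion_features.csv")  # one row per learner
features = ["vocal_intensity", "speech_rate", "facial_au_activity", "emotion_lexicon_rate"]

X = StandardScaler().fit_transform(df[features])

# Principal component analysis: reduce correlated expression channels to a few components.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

# Hierarchical clustering on the component scores to find expression profiles.
Z = linkage(scores, method="ward")
df["profile"] = fcluster(Z, t=3, criterion="maxclust")

# Multiple regression: do proficiency and cultural exposure predict emotional adaptation?
model = smf.ols("adaptation_score ~ proficiency + cultural_exposure", data=df).fit()
print(pca.explained_variance_ratio_)
print(model.summary())
```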
- Speech Communication, Jan 17, 2003. DOI: 10.1016/s0167-6393(02)00084-5
- Psychological Bulletin, Jan 1, 2002. DOI: 10.1037//0033-2909.128.2.203
- Jan 14, 2019. DOI: 10.1037/t27734-000
- Jan 1, 1969. DOI: 10.1017/cbo9781139173438
- Dec 12, 1975. DOI: 10.1163/9789004368811_003
- Jan 1, 1987. DOI: 10.7208/chicago/9780226471013.001.0001
- May 18, 2017. DOI: 10.1093/acprof:oso/9780190613501.003.0024
- Journal of Nonverbal Behavior, Jan 21, 2009. DOI: 10.1007/s10919-008-0065-7
- Feb 27, 1987. DOI: 10.1017/cbo9780511813085
- Psychological Bulletin, Jan 1, 2003. DOI: 10.1037/0033-2909.129.5.770
- Research Article. Moscow University Anthropology Bulletin (Vestnik Moskovskogo Universiteta Seria XXIII Antropologia), Feb 24, 2025. DOI: 10.55959/msu2074-8132-25-1-12
Introduction. The study of emotional facial expressions is currently gaining momentum, attracting researchers from diverse scientific disciplines. We suppose that this surge in interest can be attributed, in part, to the rapid advancement of digital technologies, particularly artificial neural networks, which are increasingly capable of recognizing and encoding facial expressions. The power of these technologies to analyze faces and emotional states is widely discussed in the media and popular culture, prompting scientists to approach the topic with responsibility and considerable caution in their judgements. Results. It should be noted at the outset that the modern literature on the anatomy of facial expression offers no consensus on the number and composition of the muscles involved in expressing emotion on the human face: different authors list different numbers of muscles, and such discrepancies may cause significant confusion, especially for researchers who are not specialists in human anatomy. This article presents an analytical review based on anatomical sources and the Facial Action Coding System (FACS), a leading anatomically validated technique for recognizing and classifying facial expressions. Alongside the anatomy of the muscular system, we explore the characteristics of the related neural structures. To give readers a comprehensive understanding of facial communication, we review the history of its study and trace the evolutionary development of the human face and the emergence and evolution of facial expressions in phylogeny. Conclusion. Facial expressions of emotion are the result of a long evolutionary process, closely interrelated with the development of the nervous system and social organization. Based on the most comprehensive data available, the muscular system underlying human emotional expressions is more complex than anatomical classifications typically suggest: it comprises 26 paired muscles and one unpaired muscle, many of which are further subdivided into smaller parts with distinct expressive functions. We believe that this article will help to systematize modern data on the anatomy of human facial expressions. © 2025. This work is licensed under a CC BY 4.0 license
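As a small illustration of how FACS ties action units to this musculature, the snippet below encodes a few commonly cited AU-to-muscle mappings as a lookup table. The mapping shown is a conventional simplification drawn from standard FACS descriptions, not the article's full inventory of 26 paired muscles plus one unpaired muscle.

```python
# Illustrative sketch (not from the article): a minimal lookup from a few FACS
# action units to the muscles conventionally listed as producing them.
FACS_AU_MUSCLES = {
    "AU1":  ("Inner brow raiser",    ["frontalis (pars medialis)"]),
    "AU2":  ("Outer brow raiser",    ["frontalis (pars lateralis)"]),
    "AU4":  ("Brow lowerer",         ["corrugator supercilii", "depressor supercilii", "procerus"]),
    "AU6":  ("Cheek raiser",         ["orbicularis oculi (pars orbitalis)"]),
    "AU12": ("Lip corner puller",    ["zygomaticus major"]),
    "AU15": ("Lip corner depressor", ["depressor anguli oris"]),
}

def muscles_for(aus):
    """Return the set of muscles implicated by a list of coded action units."""
    return sorted({m for au in aus for m in FACS_AU_MUSCLES.get(au, ("", []))[1]})

print(muscles_for(["AU6", "AU12"]))  # a Duchenne-type smile
```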
- Research Article. Neuropsychiatric Disease and Treatment, Jan 1, 2012. DOI: 10.2147/ndt.s37174
Background: Research shows that impairment in the expression and recognition of emotion exists in multiple psychiatric disorders. The objective of the current study was to evaluate the way that patients with schizophrenia and those with obsessive-compulsive disorder experience and display emotions in relation to specific emotional stimuli, using the Facial Action Coding System (FACS). Methods: Thirty individuals participated in the study, comprising 10 patients with schizophrenia, 10 with obsessive-compulsive disorder, and 10 healthy controls. All participants underwent clinical sessions to evaluate their symptoms and watched emotion-eliciting video clips while facial activity was videotaped. Congruent/incongruent feeling of emotions and facial expressions in reaction to emotions were evaluated. Results: Patients with schizophrenia and obsessive-compulsive disorder presented similarly incongruent emotive feelings and facial expressions (significantly worse than healthy participants). Correlations between the severity of the psychopathological condition (in particular the severity of affective flattening) and impairment in the recognition and expression of emotions were found. Discussion: Patients with obsessive-compulsive disorder and schizophrenia seem to present a similarly relevant impairment in both experiencing and displaying emotions; this impairment may be seen as a chronic consequence of the same neurodevelopmental origin of the two diseases. Mimic expression could be seen as a behavioral indicator of affective flattening. The FACS could be used as an objective way to evaluate clinical evolution in patients.
- Book Chapter. Jan 1, 2011. DOI: 10.4018/978-1-60960-541-4.ch008
The aim of this chapter is to identify face areas containing high facial expression information, which may be useful for facial expression analysis, face and facial expression recognition, and synthesis. In studies of facial expression analysis, landmarks are usually placed on well-defined craniofacial features. In this experiment, the authors selected a set of landmarks based on craniofacial anthropometry and associated each landmark with facial muscles and the Facial Action Coding System (FACS) framework, that is, locating landmarks in less palpable areas that exhibit high facial expression mobility. The selected landmarks are statistically analysed in terms of facial muscle motion based on FACS. Human faces channel both verbal and non-verbal communication (speech, facial expressions of emotion, gestures, and other communicative actions), so these cues may be significant in identifying expressions such as pain, agony, anger, and happiness. The authors describe the potential of computer-based models of three-dimensional (3D) facial expression analysis and non-verbal communication recognition to assist in biometric recognition and clinical diagnosis.
- Research Article. American Journal of Psychiatry, Jan 1, 2005. DOI: 10.1176/appi.ajp.162.1.92
Blunted affect is a major symptom in schizophrenia, and affective deficits clinically encompass deficits in expressiveness. Emotion research and ethological studies have shown that patients with schizophrenia are impaired in various modalities of expressiveness (posed and spontaneous emotion expressions, coverbal gestures, and smiles). Similar deficits have been described in depression, but comparative studies have brought mixed results. Our aim was to study and compare facial expressive behaviors related to affective deficits in patients with schizophrenia, depressed patients, and nonpatient comparison subjects. Fifty-eight nondepressed inpatients with schizophrenia, 25 nonpsychotic inpatients with unipolar depression, and 25 nonpatient comparison subjects were asked to reproduce facial emotional expressions. Then the subjects were asked to speak about a specific emotion for 2 minutes. Each time, six cross-cultural emotions were tested. Facial emotional expressions were rated with the Facial Action Coding System. The number of facial coverbal gestures (facial expressions that are tied to speech) and the number of words were calculated. In relation to nonpatient comparison subjects, both patient groups were impaired for all expressive variables. Few differences were found between schizophrenia and depression: depressed subjects had less spontaneous expressions of other-than-happiness emotions, but overall, they appeared more expressive. Fifteen patients with schizophrenia were tested without and with typical or atypical antipsychotic medications: no differences could be found in study performance. The patients with schizophrenia and the patients with depression presented similar deficits in various expressive modalities: posed and spontaneous emotional expression, smiling, coverbal gestures, and verbal output.
- Research Article. Skin Research and Technology, Apr 6, 2020. DOI: 10.1111/srt.12864
There are few reports on the relationship between facial expression formation and the mass of the muscles responsible for facial expression. We analyzed facial expressions using the Facial Action Coding System (FACS) and examined the mass and characteristics of the facial expression muscles using multi-detector row computed tomography (MDCT) and magnetic resonance imaging (MRI); the relation between the two was then statistically evaluated. Ten healthy women in their 40s (43.4 ± 3.0 years, range 40-49) were enrolled. The expressive faces were analyzed by facial expression analysis software based on the FACS, and the muscle mass and characteristics of the facial expression muscles were investigated using MDCT/MRI. The correlation between an integrated expression intensity value (IEIV) for FACS of the widest possible grin and muscle mass was analyzed, and the mean values of two groups (G-1 and G-2), categorized by fat infiltration into the muscle, were compared. A positive correlation was found between the IEIV and muscle mass, and the IEIV of G-1 was significantly larger than that of G-2. The results indicate that subjects with a high IEIV and an expressive face had thicker facial expression muscles with little fat infiltration. This objective imaging study using FACS, MDCT, and MRI corroborates anti-aging medical findings on the facial expression muscles associated with a youthful facial appearance, and its results could contribute to elucidating the mechanisms of the facial aging process and to the development of cosmetology.
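A minimal sketch of the kind of statistics reported above (a correlation between IEIV and muscle mass, plus a comparison of the two fat-infiltration groups) might look as follows; the input file, column names, and the choice of Welch's t-test are illustrative assumptions rather than the paper's exact procedure.

```python
# Hedged sketch: Pearson correlation between IEIV and facial-muscle mass, and a
# comparison of IEIV between the two fat-infiltration groups. All names assumed.
import pandas as pd
from scipy import stats

df = pd.read_csv("facs_mdct_mri.csv")  # columns: ieiv, muscle_mass, fat_group ("G-1"/"G-2")

r, p = stats.pearsonr(df["ieiv"], df["muscle_mass"])
print(f"IEIV vs muscle mass: r={r:.2f}, p={p:.3f}")

g1 = df.loc[df["fat_group"] == "G-1", "ieiv"]
g2 = df.loc[df["fat_group"] == "G-2", "ieiv"]
t, p_group = stats.ttest_ind(g1, g2, equal_var=False)  # Welch's t-test as an example
print(f"G-1 vs G-2 IEIV: t={t:.2f}, p={p_group:.3f}")
```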
- Research Article. IEEE Transactions on Pattern Analysis and Machine Intelligence, Jul 1, 1997. DOI: 10.1109/34.598232
We describe a computer vision system for observing facial motion by using an optimal estimation optical flow method coupled with geometric, physical and motion-based dynamic models describing the facial structure. Our method produces a reliable parametric representation of the face's independent muscle action groups, as well as an accurate estimate of facial motion. Previous efforts at analysis of facial expression have been based on the facial action coding system (FACS), a representation developed in order to allow human psychologists to code expression from static pictures. To avoid use of this heuristic coding scheme, we have used our computer vision system to probabilistically characterize facial motion and muscle activation in an experimental population, thus deriving a new, more accurate, representation of human facial expressions that we call FACS+. Finally, we show how this method can be used for coding, analysis, interpretation, and recognition of facial expressions.
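The study's optimal-estimation optical flow is specific to that system, but the general idea of measuring facial motion from frame-to-frame flow can be sketched with an off-the-shelf method. The example below uses OpenCV's Farnebäck dense flow as a stand-in; the video file name is a placeholder.

```python
# Illustrative only: dense optical flow between consecutive frames of a face video,
# the kind of motion estimate such systems couple with muscle-based dynamic models.
import cv2
import numpy as np

cap = cv2.VideoCapture("face_clip.mp4")  # placeholder file name
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

motion_energy = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Mean magnitude of the flow field as a crude per-frame measure of facial motion.
    motion_energy.append(np.linalg.norm(flow, axis=2).mean())
    prev_gray = gray

cap.release()
print(f"mean facial motion energy: {np.mean(motion_energy):.3f}")
```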
- Research Article. PLOS ONE, Jan 9, 2017. DOI: 10.1371/journal.pone.0169110
Background and aim: Parkinson's disease (PD) patients have impaired facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups. Methods: Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (emotion recognition task). Participants were video-recorded while posing facial expressions of six primary emotions (happiness, sadness, surprise, disgust, fear, and anger), and the most expressive picture for each emotion was derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random order and to identify the emotional label in a six-alternative forced-choice format (emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded, and at the end of each trial the participant rated his or her confidence in the perceived accuracy of the response. Results: For emotion recognition, PD patients scored lower than HC on the Ekman total score (p<0.001) and on the sub-scores for happiness, fear, anger, and sadness (p<0.01) and surprise (p = 0.02). In the facial emotion expressivity task, PD and HC differed significantly in the total score (p = 0.05) and in the sub-scores for happiness, sadness, and anger (all p<0.001). RT and the level of confidence showed significant differences between PD and HC for the same emotions. There was a significant positive correlation between emotion facial recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). Conclusions: PD patients showed difficulties both in recognizing emotional facial expressions produced by others and in posing facial emotional expressions, compared to healthy subjects. The linear correlation between recognition and expression in both experimental groups suggests that the two mechanisms share a common system, which may be deteriorated in patients with PD. These results open new clinical and rehabilitation perspectives.
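The reported link between recognition and expressivity is essentially a correlation across the six emotions. A hedged sketch of that computation, with invented per-emotion scores standing in for the study's data, is shown below.

```python
# Minimal sketch, under assumed data: correlating per-emotion recognition scores
# with expressivity scores, and repeating the correlation on their ranks.
from scipy import stats

emotions = ["happiness", "sadness", "surprise", "disgust", "fear", "anger"]
recognition = {"happiness": 0.95, "sadness": 0.70, "surprise": 0.85,
               "disgust": 0.60, "fear": 0.55, "anger": 0.75}   # invented placeholders
expressivity = {"happiness": 0.90, "sadness": 0.65, "surprise": 0.80,
                "disgust": 0.62, "fear": 0.50, "anger": 0.72}  # invented placeholders

rec = [recognition[e] for e in emotions]
exp_ = [expressivity[e] for e in emotions]

r, p = stats.pearsonr(rec, exp_)
rho, p_rank = stats.spearmanr(rec, exp_)  # rank-based version of the same question
print(f"Pearson r={r:.2f} (p={p:.3f}); Spearman rho={rho:.2f} (p={p_rank:.3f})")
```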
- Research Article. Journal of Neuroscience, Psychology, and Economics, Dec 1, 2014. DOI: 10.1037/npe0000028
In this study, we validated automated facial coding (AFC) software, FaceReader (Noldus, 2014), on two publicly available and objective datasets of human expressions of basic emotions. We present the matching scores (accuracy) for recognition of facial expressions and the Facial Action Coding System (FACS) index of agreement. In 2005, matching scores of 89% were reported for FaceReader; however, that research used a version of FaceReader with older algorithms (version 1.0) that did not contain FACS classifiers. In this study, we tested the newest version (6.0). FaceReader recognized 88% of the target emotional labels in the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) and the Amsterdam Dynamic Facial Expression Set (ADFES). The software reached a FACS index of agreement of 0.67 on average across both datasets. The results of this validation test are meaningful only in relation to human performance rates for both basic emotion recognition and FACS coding. Human emotion recognition for the two datasets was 85%, so FaceReader is about as good at recognizing emotions as human observers. To receive FACS certification, a human coder must reach an agreement of 0.70 with the master coding of the final test. Even though FaceReader did not attain this score, action units (AUs) 1, 2, 4, 5, 6, 9, 12, 15, and 25 might be used with high accuracy. We believe that FaceReader has proven to be a reliable indicator of basic emotions over the past decade and has the potential to become similarly robust with FACS.
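The FACS index of agreement mentioned here is commonly computed as twice the number of action units two codings share, divided by the total number of action units each coding contains. A small sketch of that calculation, with invented codings, follows.

```python
# Minimal sketch of the inter-coder agreement index commonly used for FACS
# certification; the example codings are invented.
def facs_agreement(coder_a: set, coder_b: set) -> float:
    """Agreement = 2 * |A intersection B| / (|A| + |B|)."""
    if not coder_a and not coder_b:
        return 1.0
    return 2 * len(coder_a & coder_b) / (len(coder_a) + len(coder_b))

human   = {"AU1", "AU2", "AU5", "AU25"}   # e.g. master (human) coding of a frame
machine = {"AU1", "AU2", "AU25", "AU26"}  # e.g. automated coding of the same frame
print(f"agreement = {facs_agreement(human, machine):.2f}")  # 0.75 here
```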
- Book Chapter. Jan 1, 2014. DOI: 10.4018/978-1-4666-5966-7.ch008
Automatic Facial Expression Analysis systems have come a long way since the earliest approaches in the early 1970s. We are now at a point where the first systems are commercially applied, most notably the smile detectors included in digital cameras. As one of the most comprehensive and objective ways to describe facial expressions, the Facial Action Coding System (FACS) has received significant and sustained attention within the field. Over the past 30 years, psychologists and neuroscientists have conducted extensive research on various aspects of human behaviour using facial expression analysis coded in terms of FACS. Automating FACS coding would make this research faster and more widely applicable, opening up new avenues to understanding how we communicate through facial expressions. Mainly because of the cost-effectiveness of existing recording equipment, until recently almost all work in this area involved 2D imagery, despite its inherent problems with pose and illumination variations. To deal with these problems, 3D recordings are increasingly used in expression analysis research. In this chapter, the authors give an overview of 2D and 3D FACS recognition and summarise current challenges and opportunities.
- Research Article. Journal of Psychosomatic Research, Feb 19, 2008. DOI: 10.1016/j.jpsychores.2007.09.010
Impact of age on the facial expression of pain
- Book Chapter. Jan 1, 2011. DOI: 10.4018/978-1-60960-024-2.ch002
This chapter describes a probabilistic framework for faithful reproduction of spontaneous facial expressions on a synthetic face model in a real time interactive application. The framework consists of a coupled Bayesian network (BN) to unify the facial expression analysis and synthesis into one coherent structure. At the analysis end, we cast the facial action coding system (FACS) into a dynamic Bayesian network (DBN) to capture relationships between facial expressions and the facial motions as well as their uncertainties and dynamics. The observations fed into the DBN facial expression model are measurements of facial action units (AUs) generated by an AU model. Also implemented by a DBN, the AU model captures the rigid head movements and nonrigid facial muscular movements of a spontaneous facial expression. At the synthesizer, a static BN reconstructs the Facial Animation Parameters (FAPs) and their intensity through the top-down inference according to the current state of facial expression and pose information output by the analysis end. The two BNs are connected statically through a data stream link. The novelty of using the coupled BN brings about several benefits. First, a facial expression is inferred through both spatial and temporal inference so that the perceptual quality of animation is less affected by the misdetection of facial features. Second, more realistic looking facial expressions can be reproduced by modeling the dynamics of human expressions in facial expression analysis. Third, very low bitrate (9 bytes per frame) in data transmission can be achieved.
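Purely as an illustration of how little data such a scheme needs per frame, the snippet below packs an expression label, an intensity value, and three head-pose angles into nine bytes. The field layout is an assumption for demonstration; the chapter does not specify its actual byte format.

```python
# Illustrative 9-byte per-frame record; the layout is assumed, not the chapter's format.
import struct

FRAME_FORMAT = "<BHhhh"  # 1 + 2 + 2 + 2 + 2 = 9 bytes, little-endian

def pack_frame(expr_id: int, intensity: float, yaw: float, pitch: float, roll: float) -> bytes:
    """Quantise intensity to 16 bits and pose angles to hundredths of a degree."""
    return struct.pack(FRAME_FORMAT,
                       expr_id,
                       int(round(intensity * 65535)),
                       int(round(yaw * 100)), int(round(pitch * 100)), int(round(roll * 100)))

def unpack_frame(buf: bytes):
    expr_id, q_int, yaw, pitch, roll = struct.unpack(FRAME_FORMAT, buf)
    return expr_id, q_int / 65535, yaw / 100, pitch / 100, roll / 100

frame = pack_frame(expr_id=3, intensity=0.8, yaw=12.5, pitch=-4.0, roll=1.2)
print(len(frame), unpack_frame(frame))  # 9 bytes per frame
```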
- Dissertation. Mar 1, 2021. DOI: 10.22024/unikent/01.02.86872
This thesis examines the role of facial mimicry during tasks of facial emotional expression recognition. The first study examines whether facial proprioception modulates the ability to recognise facial expressions and/or facial mimicry. Results showed that, although mimicry was detected, participants' recognition ability was not modulated by their facial proprioceptive ability. Study 2 examines whether and how the presence of contextual information that is either congruent or incongruent with emotional facial expressions modulates the accuracy of recognition of the expression and/or facial mimicry. Study 3 has a similar method and design to the second and includes both clear-cut and low-intensity emotional facial expressions. Taken together, Studies 2 and 3 show that the ambiguity of facial expressions and/or the affective incongruence of linguistic context decreased the recognition of happy and angry faces. In the fourth chapter we report two EEG-EMG studies (Studies 4 and 5) aimed at examining the relationship between facial mimicry and ERPs associated with emotional processing (EPN and N400). The two studies compare the time-course of these ERPs with that of facial mimicry during a fast valence detection task (Study 4) and an explicit emotional recognition task (Study 5), to examine the interplay between cognitive processes and facial mimicry. The facial expressions used in both studies cover four levels of intensity per emotion. Study 4 involves a valence detection task of rapidly presented emotional facial expressions, whereas the task of Study 5 measured the participant's ability to recognise discrete emotional expressions. Findings from both studies are in line with the hypothesis that N400 is sensitive to the increased demand of an emotion recognition task. The findings suggest that internal simulation occurs especially in cases of increased task demand and develops through a complementary cognitive-peripheral process in which mimicry responds selectively with respect to central activity.
- Book Chapter. May 8, 2017. DOI: 10.1017/9781316676202.011
According to a recent survey on social signal processing (Vinciarelli, Pantic, & Bourlard, 2009), next-generation computing needs to implement the essence of social intelligence, including the ability to recognize human social signals and social behaviors such as turn taking, politeness, and disagreement, in order to become more effective and more efficient. Social signals and social behaviors are the expression of one's attitude towards social situations and interplay, and they are manifested through a multiplicity of nonverbal behavioral cues, including facial expressions, body postures and gestures, and vocal outbursts like laughter. Of the many social signals, only face, eye, and posture cues are capable of informing us about all identified social behaviors. During social interaction, it is a social norm to look one's dyadic partner in the eyes, focusing one's vision on the face; facial expressions thus make for very powerful social signals. As one of the most comprehensive and objective ways to describe facial expressions, the Facial Action Coding System (FACS) has recently received significant attention. Automating FACS coding would greatly benefit social signal processing, opening up new avenues to understanding how we communicate through facial expressions. In this chapter we provide a comprehensive overview of research into machine analysis of facial actions. We systematically review all components of such systems: pre-processing, feature extraction, and machine coding of facial actions. In addition, the existing FACS-coded facial expression databases are summarized. Finally, the challenges that must be addressed to make automatic facial action analysis applicable in real-life situations are discussed extensively. Introduction: Scientific work on facial expressions can be traced back to at least 1872, when Charles Darwin published The Expression of the Emotions in Man and Animals. He explored the importance of facial expressions for communication and described variations in facial expressions of emotions. Today, it is widely acknowledged that facial expressions serve as the primary nonverbal social signal for human beings and are largely responsible for regulating our interactions with each other (Ekman & Rosenberg, 2005). They communicate emotions, clarify and emphasize what is being said, and signal comprehension, disagreement, and intentions (Pantic, 2009).
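To make the three-stage structure concrete, here is a schematic skeleton of such a system (pre-processing, feature extraction, and machine coding of action units). Every stage is deliberately stubbed; it shows only the data flow, not any particular published method.

```python
# Schematic sketch (my own, not from the chapter) of the three-stage pipeline.
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Detect, crop, and align the face region (stubbed)."""
    return frame  # a real system would run face detection and landmark-based alignment

def extract_features(face: np.ndarray) -> np.ndarray:
    """Compute appearance/geometry features (stubbed as a flattened crop)."""
    return face.astype(np.float32).ravel()

def code_action_units(features: np.ndarray) -> dict:
    """Predict AU activations with a trained classifier (stubbed with a threshold)."""
    score = float(features.mean()) / 255.0
    return {"AU12": score > 0.5}  # placeholder decision rule

frame = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
print(code_action_units(extract_features(preprocess(frame))))
```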
- Research Article. Schmerz (Berlin, Germany), Apr 8, 2016. DOI: 10.1007/s00482-016-0105-x
The monitoring of facial expressions to assess pain intensity provides a way to determine the need for pain medication in patients who are not able to report it verbally. In this study two methods for facial expression analysis, the Facial Action Coding System (FACS) and electromyography (EMG) of the zygomaticus muscle and corrugator supercilii, were compared to verify the possibility of using EMG for pain monitoring. Eighty-seven subjects received painful heat stimuli via a thermode on the right forearm in two identical experimental sequences, with and without EMG recording. With FACS, pain threshold and pain tolerance could be distinguished reliably. Multiple regression analyses indicated that some facial expressions had a predictive value. Correlations between FACS and pain intensity and between EMG and pain intensity were high, indicating a closer relationship between EMG and increasing pain intensity. Between EMG and FACS only a low correlation was observed, whereas EMG correlated much better with pain intensity. The results show that facial expression analysis based on FACS represents a credible method to detect pain, but because of the time and personnel costs involved, FACS cannot be used routinely until automatic systems work accurately. In the meantime, EMG appears helpful for enabling continuous pain monitoring in patients with acute post-operative pain.
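A compact sketch of the comparison described (correlating FACS-based activity and EMG amplitude with pain ratings, and the two measures with each other) could be written as follows; the data file and column names are assumptions for illustration.

```python
# Hedged sketch: correlate FACS-based facial activity and facial EMG amplitude with
# rated pain intensity, and with each other. All names are assumed placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("pain_trials.csv")  # columns: facs_activity, emg_amplitude, pain_rating

for measure in ("facs_activity", "emg_amplitude"):
    r, p = stats.pearsonr(df[measure], df["pain_rating"])
    print(f"{measure} vs pain rating: r={r:.2f}, p={p:.3f}")

r_fm, p_fm = stats.pearsonr(df["facs_activity"], df["emg_amplitude"])
print(f"FACS vs EMG: r={r_fm:.2f}, p={p_fm:.3f}")
```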
- Research Article. PLOS ONE, Jun 2, 2017. DOI: 10.1371/journal.pone.0178972
Background: Problems with social-emotional processing are known to be an important contributor to the development and maintenance of eating disorders (EDs). Diminished facial communication of emotion has been frequently reported in individuals with anorexia nervosa (AN). Less is known about facial expressivity in bulimia nervosa (BN) and in people who have recovered from AN (RecAN). This study aimed to pilot the use of computerised facial expression analysis software to investigate emotion expression across the ED spectrum and recovery in a large sample of participants. Method: 297 participants with AN, BN, or RecAN and healthy controls were recruited. Participants watched film clips designed to elicit happy or sad emotions, and facial expressions were then analysed using FaceReader. Results: The findings mirrored those from previous work, showing that healthy control and RecAN participants expressed significantly more positive emotion during the positive clip than the AN group. There were no differences in emotion expression during the sad film clip. Discussion: These findings support the use of computerised methods to analyse emotion expression in EDs. They also demonstrate that reduced positive emotion expression is likely to be associated with the acute stage of AN illness, with individuals with BN showing an intermediate profile.
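The group comparison summarised above can be sketched as a one-way ANOVA over FaceReader output; the column names and input file below are assumed for illustration and do not reproduce the study's analysis.

```python
# Minimal sketch: compare FaceReader "happy" intensity during the positive clip
# across AN, BN, RecAN, and healthy-control groups. All names are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("facereader_scores.csv")  # columns: group, happy_intensity
groups = [g["happy_intensity"].values for _, g in df.groupby("group")]

f, p = stats.f_oneway(*groups)
print(f"one-way ANOVA across groups: F={f:.2f}, p={p:.3f}")
print(df.groupby("group")["happy_intensity"].mean())
```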