The translator and the pea
This chapter explores how emotions and objects circulate and are transformed in translation. It argues that translation provides an interesting site for thinking both about emotions and about the changing emotional ways we engage with objects. More particularly, it examines how the translation of nuances, seemingly little things – facial expressions, small gestures or tiny material objects like peas – may offer us a glimpse into how emotions and objects get handled, used and handed on in translation. After a short introduction on emotion, language and translation, Section 2.2, God spoke to Cain: Why this tantrum? Why the sulking?, focuses on the translation of biblical 'anger' and traces the trajectory of Cain and Abel's story (Genesis 4) in English translations. Section 2.3, The prince and the pea, suggests that objects may be transformed and acquire new emotional dimensions in translation. Section 2.4, On emotional objects, considers how translation engages with material things and discusses two case studies: Joseph's tunic (Genesis 37) and the process of its fetishization in translation; and 'ambula divina' in Machado de Assis's Brazilian novel The Posthumous Memoirs of Brás Cubas (1881) and its five English translations. Finally, Section 2.5, The Empire of 'Saudades', comments on the ultimate Portuguese untranslatable emotion word, 'saudades', and reflects on some of the functions of its proclaimed 'untranslatability'.
- Research Article
3
- 10.1080/13854046.2017.1418024
- Dec 21, 2017
- The Clinical Neuropsychologist
Objective: Existing single-case studies have reported deficits in recognizing basic emotions through facial expressions alongside unaffected performance with body expressions, but not the opposite pattern. The aim of this paper is to present a case study of impaired emotion recognition through body expressions and intact performance with facial expressions. Methods: In this single-case study we assessed a 30-year-old patient with autism spectrum disorder, without intellectual disability, and a healthy control group (n = 30) on four tasks of basic and complex emotion recognition through face and body movements, and two non-emotional control tasks. To analyze the dissociation between facial and body expressions, we used Crawford and Garthwaite’s operational criteria, and we compared the patient’s performance with the control group’s using a modified one-tailed t-test designed specifically for single-case studies. Results: There were no statistically significant differences between the patient’s and the control group’s performances on the non-emotional body movement task or the facial perception task. For both kinds of emotions (basic and complex), statistically significant differences between the patient and the control group were observed only for the recognition of body expressions; there were no significant differences in correct answers for emotional facial stimuli. Conclusions: Our results showed a profile of impaired emotion recognition through body expressions and intact performance with facial expressions. This is the first case study to describe this kind of dissociation pattern between facial and body expressions of basic and complex emotions.
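The "modified one-tailed t-test designed specifically for single-case studies" is the Crawford–Howell method of comparing one patient's score against a small control sample. A minimal sketch in Python; the function name and all scores are invented for illustration and do not come from the paper:

```python
import math

def crawford_howell_t(patient_score, control_scores):
    """Compare a single case to a small control sample
    (Crawford & Howell, 1998): t = (x - M) / (SD * sqrt((n + 1) / n)),
    evaluated against Student's t with n - 1 degrees of freedom."""
    n = len(control_scores)
    mean = sum(control_scores) / n
    # sample variance with n - 1 in the denominator
    var = sum((x - mean) ** 2 for x in control_scores) / (n - 1)
    t = (patient_score - mean) / (math.sqrt(var) * math.sqrt((n + 1) / n))
    return t, n - 1

# invented example: a patient scoring far below five control participants
t, df = crawford_howell_t(4, [10, 12, 14, 16, 18])
print(round(t, 3), df)  # -2.887 4
```

A strongly negative t with a small one-tailed p indicates the patient's score is abnormally low relative to the controls, which is the logic behind the dissociation analysis described in the abstract.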
- Conference Article
16
- 10.1109/acii.2015.7344592
- Sep 1, 2015
This study presents an experiment highlighting how participants combine facial expressions and haptic feedback to perceive emotions when interacting with an expressive humanoid robot. Participants were asked to interact with the humanoid robot through a handshake behavior while looking at its facial expressions. Experimental data were examined within the information integration theory framework. Results revealed that participants combined Facial and Haptic cues additively to evaluate the Valence, Arousal, and Dominance dimensions. The relative importance of each modality was different across the emotional dimensions. Participants gave more importance to facial expressions when evaluating Valence. They gave more importance to haptic feedback when evaluating Arousal and Dominance.
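Within information integration theory, an additive rule predicts a judgment as a weighted sum of cue values. A toy sketch of that idea; every cue value and weight below is invented for illustration (none come from the study):

```python
# hypothetical scale values for each cue level (invented numbers)
face_vals = {"smile": 0.8, "neutral": 0.0, "frown": -0.8}
haptic_vals = {"strong": 0.6, "soft": -0.4}

def predicted_valence(face, haptic, w_face=0.7, w_haptic=0.3):
    """Additive integration: a weighted sum of facial and haptic cue values.
    The larger facial weight mirrors the reported dominance of faces for
    Valence judgments; the weights themselves are invented."""
    return w_face * face_vals[face] + w_haptic * haptic_vals[haptic]

v = predicted_valence("smile", "strong")  # approx. 0.74
```

In the study's framework, an additive pattern shows up as parallel lines in a factorial plot of ratings; swapping the weights per dimension captures the finding that haptics mattered more for Arousal and Dominance.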
- Book Chapter
8
- 10.1007/978-3-540-72586-2_11
- Jan 1, 2007
This paper presents a new method for recognizing facial expressions across various internal states using manifold learning (ML). The manifold learning of facial expressions reflects local features of facial deformations such as concavities and protrusions. We developed a manifold-learning-based representation of facial expression images for feature extraction. First, we propose a zero-phase whitening step to obtain illumination-invariant images. Second, a facial expression representation based on locally linear embedding (LLE) was developed. Finally, facial expressions were classified in emotion dimensions on a two-dimensional structure of emotion, with a pleasure/displeasure dimension and an arousal/sleep dimension. The proposed system maps facial expressions in various internal states into the embedding space described by LLE. We explore the locally linear embedding space as a facial expression space in continuous dimensions of emotion. Keywords: Facial Expression, Internal State, Face Image, Emotion Word, Facial Expression Recognition.
- Conference Article
2
- 10.1109/icassp.2011.5947659
- May 1, 2011
A framework for generating facial expressions from emotional states in daily conversation is described. The framework allows avatars to express the speaker's state, not just prototypical emotions. In this paper, the naturalness of generated facial expressions presented together with dialogue speech is examined. An experiment examining the naturalness of facial expressions presented as still images shows that the two avatars' facial expressions are almost as natural as manually made facial expressions. In an experiment to determine the natural display speed of dynamic facial expressions, significant interactions between display speed and emotion group were found for most emotion dimensions.
- Research Article
74
- 10.1016/j.pnpbp.2009.07.019
- Jul 24, 2009
- Progress in Neuro-Psychopharmacology and Biological Psychiatry
Reduced activation in the mirror neuron system during a virtual social cognition task in euthymic bipolar disorder
- Book Chapter
1
- 10.1007/11559573_137
- Jan 1, 2005
A new approach for recognizing facial expressions in various internal states is proposed that is illumination-invariant and does not depend on detectable cues such as a neutral expression. First, we propose a zero-phase whitening step of the images for illumination invariance. Second, we developed a representation of face images based on principal component analysis (PCA), excluding the first principal component, as features for facial expression recognition independent of a neutral expression. The PCA basis vectors for this data set reflected the changes in facial expression well. Finally, a neural network model for classification of facial expressions based on the dimensional model was created. The dimensional model recognizes not only the six facial expressions related to the six basic emotions (happiness, sadness, surprise, anger, fear, disgust), but also expressions of various internal states. PCA representations excluding the first principal component, combined with a neural network model on the two-dimensional structure of emotion, improve on expression recognition limited to a small number of discrete categories of emotional expressions, and overcome the problems of lighting sensitivity and dependence on cues such as a neutral expression.
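The core trick described above, discarding the first principal component because it tends to capture global illumination rather than expression, can be sketched with plain NumPy on synthetic data. The data, dimensions, and injected "illumination" direction are all invented for illustration, and the classifier stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(42)
# hypothetical stand-in for flattened face images: 60 samples x 20 pixels
X = rng.normal(size=(60, 20))
# add a strong shared brightness offset per image, mimicking lighting
# variation; this direction should dominate the first principal component
X += np.outer(rng.normal(size=60) * 10.0, np.ones(20))

Xc = X - X.mean(axis=0)            # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                 # PCA scores (projections onto the PCs)
features = scores[:, 1:]           # drop the first PC, keep the rest
print(features.shape)              # (60, 19)
```

Because the lighting offset moves every pixel together, it concentrates in the top singular vector; dropping that component leaves features that vary mostly with the remaining (expression-like) structure.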
- Research Article
4
- 10.1016/j.lmot.2023.101938
- Oct 11, 2023
- Learning and Motivation
Cultural psychology of English translation through computer vision-based robotic interpretation
- Research Article
54
- 10.1080/08351813.2015.1058607
- Jul 3, 2015
- Research on Language and Social Interaction
This article examines how speakers and hearers collaborate to modify their shared emotional stances in mundane dyadic conversations. Our purpose is to determine how the recipient’s facial expression of emotion during or immediately following the speaker’s utterance contributes to the talk. Such facial expressions do not simply mirror the speaker’s stance or display understanding of the speaker’s talk; rather, they perform systematic operations on the projected course of the talk. Moreover, these facial displays of stance are well-timed and coordinated reactions that (in our sample) lead the way to a more light-hearted mode of discussion. Facial expressions that modify the shared emotional stance can: (a) reenact a past, previously shared emotional stance; (b) evoke a new, emotionally appropriate response to the talk; (c) establish a stance that is withheld and/or ambiguous in the talk; or (d) offer an alternative emotion to frame the talk. The data are in Finnish with English translation.
- Research Article
- 10.1353/shb.2020.0029
- Jan 1, 2020
- Shakespeare Bulletin
Reviewed by: Justin B. Hopkins. West Side Story. Presented at the Broadway Theatre, New York City. From February 20, 2020. Directed by Ivo van Hove. Choreography by Anne Teresa De Keersmaeker. Set and light design by Jan Versweyveld. Costumes by An D'Huys. Sound by Tom Gibbons. Video by Luke Halls. Orchestrations by Jonathan Tunick. Musical direction and supervision by Alexander Gemignani. With Yesenia Ayala (Anita), Kevin Csolak (A-Rab), Zuri Noelle Ford (Anybodys), Jacob Guzman (Chino), Matthew Johnson (Baby John), Dharon E. Jones (Riff), Daniel Oreskes (Doc), Pippa Pearthree (Glad Hand), Shereen Pimentel (Maria), Isaac Powell (Tony), Amar Ramasar (Bernardo), Thomas Jay Ryan (Lt. Schrank), Ahmad Simmons (Diesel), Danny Wolohan (Officer Krupke), and others. Back on Broadway, director Ivo van Hove, renowned for his radical re-interpretations of classic texts, has turned his bold eye on the beloved musical West Side Story. van Hove updated the setting to the current day, cut enough of the script to keep the performance under two hours (without intermission, when I attended in previews), and framed the entire production with extensive film projection. The remarkable design and some strong acting provided many striking moments. However, some elements proved disappointing: occasionally excessive video and blocking that badly interfered with the singing. While directing undeniably innovative theater, van Hove seemed to forget to focus on one fundamental aspect of this form: the music. Especially given this musical's source material, I was reminded of productions of Shakespeare's plays that, while stunningly staged, ignored the poetry. The creativity of this production was impressive. Keeping the stage almost entirely bare, van Hove, along with his longtime collaborator in set and lighting design, Jan Versweyveld, and video designer Luke Halls, fashioned spectacle out of film, some pre-recorded, some captured and projected live.
From the first slow pan of the heads and torsos of the Jets and Sharks—standing in a line, snapping their fingers, and scowling (and sporting some impressive ink, courtesy of makeup and tattoo designer Andrew Sotomayor)—the camera was an integral part of the storytelling. Frequently, the audience could see mobile camera operators navigating the action on stage, as they did during the dance at the gym, weaving in and out of the dancers while the dynamic images flashed on the scrim behind. Sometimes the camera was unseen and static. At one point during the dance, the projected view switched to an overhead and downward shot with Maria in the middle, bodies swirling around her. Maybe the most effective camera use was when the audience viewed via screen the scenes played in Doc's drugstore and the bridal shop. Both locations were sets built behind the stage's back wall, in remarkably thorough and realistic detail. However, physically, the audience could see only partially through the back doors what was going on within. As the actors came and went from these sets, our views shifted between stage and screen, between live people and projected images. Our first encounters with Tony and Maria were off-stage, as it were, and that was also where they played their pivotal love scene, "One hand, one heart." The camera allowed us to see the actors' facial expressions and small gestures up close, increasing intimacy, despite the audience's actual distance. Even more moving—appropriately appalling, actually—was how the video projection contributed to van Hove's aggressive, grim staging of the scene in which Anita is assaulted by the Jets in Doc's store. Then the camera acted as a cold device of surveillance, documenting in grainy CCTV-like footage the criminal act of A-Rab raping Anita.
Much less effective were the background montages showing prerecorded film of locations like a New York street, or beaches in Puerto Rico, intercut with politically charged images like US President Trump's infamous border wall. In the absence of other scenery, these videos provided some illustration of place, perhaps, but mostly they felt unnecessary and awkward. For example, the contrast of the view of gorgeous resorts with footage of hurricane destruction came across as gratuitous—a forced attempt to demonstrate the ongoing relevance of the musical. At worst, this felt...
- Research Article
- 10.54254/2753-7064/16/20230639
- Nov 28, 2023
- Communications in Humanities Research
News, as an important transmission medium, can rapidly and accurately transmit the latest world events to the audience, and its most important function is to disseminate information and convey opinions. Drawing on the textual characteristics of news in general and of Chinese news in English translation in particular, and by examining a large number of news and news translation cases, this paper discusses the characteristics of news translation, the commonly used translation strategies, and the specific methods of translating Chinese news. It also analyses the feasibility of combining domesticating and foreignizing strategies and translation methods in previous news translations according to the cases. Through case studies of Chinese news translated and publicized in English, it seeks the optimal approach to rendering Chinese news in English, in order to guide better Chinese news translation in the future.
- Research Article
- 10.56799/peshum.v2i1.1129
- Dec 1, 2022
- PESHUM : Jurnal Pendidikan, Sosial dan Humaniora
The research was motivated by several problems found in the field: students' limited English vocabulary. This was indicated when some students were asked to translate vocabulary items; some knew the meanings, but many did not. The research aims to describe students' perceptions of vocabulary translation in learning English among the second-grade students of SMKN 4 Payakumbuh. It used a descriptive quantitative design with a questionnaire as the instrument. The results: first, students' perception of the word-for-word method, at 26.7%, fell in the "almost half" rating quality, with the mean of respondents' answers in the moderate option. Second, students' perception of the free translation method, at 30.2%, fell in the "almost half" rating quality, with the mean of respondents' answers in the moderate option; students' perception of the teacher's facial expression, at 36.4%, was rated good. Third, students' perception of the literal translation method, at 25.02%, fell in the "fraction" rating quality, with the mean of respondents' answers in the hard option. It can be concluded that almost half of the students have a moderate opinion about all indicators of translation method in vocabulary translation in learning English at the seventh-grade students of SMKN 4 Payakumbuh in the 2020/2021 academic year.
- Abstract
- 10.1002/alz70857_099702
- Dec 1, 2025
- Alzheimer's & Dementia
Background: Recognizing and processing emotional situations is a critical aspect of social cognition. However, this ability may be impaired in individuals with Alzheimer's disease (AD), which could affect their capacity to navigate social interactions effectively. Emotional recognition deficits in AD are thought to stem from cognitive decline and neural changes in regions associated with emotional and social processing. This study aimed to investigate differences in the comprehension and processing of emotional situations between healthy older controls (HOC) and individuals with varying degrees of AD severity, specifically those with mild (CDR 1) and moderate (CDR 2) AD. Method: The study employed a cross-sectional design to assess emotional processing across groups. A convenience sample of 115 participants was divided based on their CDR scores (0, 1, and 2). Participants were evaluated in three contexts: (1) understanding the emotional situation, (2) naming the emotion congruent with the scenario, and (3) choosing the appropriate facial expression from options corresponding to four emotional states—sadness, surprise, anger, and happiness. Result: The study revealed that the ability to understand emotional contexts, name associated emotions, and choose appropriate facial expressions was not uniformly impaired but depended on the specific emotion being portrayed. For example, emotions such as sadness and anger posed greater challenges for participants with AD, while happiness was more accurately identified across all groups. Importantly, these abilities were not strongly related, suggesting that the cognitive processes underpinning understanding, naming, and facial selection operate independently and may deteriorate at different rates in AD. Conclusion: The findings highlight a nuanced interaction between emotional processing and cognitive functioning in AD.
The ability to interpret emotional situations and associate them with specific facial expressions likely involves multiple domains of knowledge and neural networks, many of which are degraded in AD. This degradation contributes to higher odds of inaccuracies in emotional recognition and processing. These insights underscore the importance of considering both cognitive and emotional dimensions when evaluating social functioning in AD patients and tailoring interventions to preserve emotional and social competencies.
- Conference Article
2
- 10.1109/fg.2018.00059
- May 1, 2018
We present a novel conversational language model that is grounded with information about facial expressions. To our knowledge this is the first in-depth examination of grounding natural language models with facial cues. We train a neural language model that uses automatically detected facial action unit intensity information in images alongside text to generate conversational dialogue. We evaluate our model on a large and very challenging unconstrained real-world dataset from social media (Twitter), featuring 450,000 conversations with associated facial expressions. Systematic linguistic and crowdsourced analyses reveal the properties of our models: The facial expression grounding strengthens the sentiment of the resulting dialogue such that it is consistent with the valence of the facial expressions. Furthermore, the automatically generated conversational responses are rated as equivalent to the human gold-standard responses on relevance and emotion dimensions.
- Research Article
- 10.1007/s11606-010-1630-4
- Jan 22, 2011
- Journal of General Internal Medicine
A Conversation with Memory
- Conference Article
- 10.54941/ahfe1004618
- Jan 1, 2024
The integration of artificial intelligence (AI) and remote communication technology has enabled development and application of AI robots beyond industrial use, with such robots being applied for household use and services. In such applications, the facial expressions of robots are crucial to information exchange between humans and machines. The ability of robots to subtly change their facial expressions and respond sensitively to the emotional states of humans is a key focus in the development of service robots. This study analyses the Kebbi Air robot, exploring the preferences of users across various age groups (young people, middle-aged adults, and older adults) regarding the facial interface design styles applied to service robots (flat vs. realistic design). It also analyses how different design styles affect user recognition of robot facial expressions portraying emotions. This study developed a set of recommendations pertaining to facial expression styles for the Kebbi Air robot. The study comprised two phases. In the first phase, 21 older participants from Zuozhen, Tainan, Taiwan, were recruited to participate in a questionnaire survey and interview to enable assessment of their preferences regarding facial interface designs for robots. In the second phase, the survey plus interview format was repeated to compare the age-stratified data collected from four groups of participants stratified by age (i.e., older adults, middle-aged adults, prime adults and young people). The results indicate that regardless of design style, the younger participants were generally more accurate in recognising robot facial expressions than the other participants were. Furthermore, they demonstrated a higher level of emotional recognition for expressions portrayed in the realistic style, and they expressed a greater willingness to interact with robot interfaces.
On the basis of this study’s findings, qualitative suggestions were proposed for various age groups; these suggestions encompassed style recommendations for robot facial expressions (e.g., eyebrows, eyes, mouth, and auxiliary symbols). Through its empirical exploration, this study provides valuable insights and recommendations for designing robot-friendly interfaces for multiple age groups.