Human trust and intention to use intelligent agents: the impact of intelligent agents’ ability to understand human intentions and emotions

TL;DR

This study found that intelligent agents with enhanced understanding of human intentions and emotions, particularly through perspective-taking, increase user perceptions of intelligence and empathy, leading to higher trust and use intention; improved cognitive abilities boost adoption via perceptions of intelligence and empathy without triggering negative attitudes.

Abstract

Although some research has incorporated human cognition into the design of intelligent agents, few studies examine how agents’ understanding of intentions and emotions affects collaboration. This study employed a scenario-based questionnaire with 109 participants, using a structural equation model to analyse how intelligent agents’ intention and emotion understanding ability (literal vs. perspective-taking) influence trust and use intention. Results showed that intelligent agents with a higher understanding ability increased user perceptions of their intelligence and empathy, which led to greater trust and use intention. Unlike the ‘uncanny valley’ effect, perspective-taking did not increase negative attitudes towards the intelligent agents. The agent’s enhanced understanding ability increased users’ use intention via two routes: by improving perceptions of intelligence and hence increasing use intention, and by enhancing perceptions of empathy and hence improving trust and further boosting use intention. These findings provided practical guidance for designing intelligent agents with improved cognitive capabilities.
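The abstract's two mediation routes can be illustrated with a toy path analysis. The snippet below is a minimal sketch, not the paper's actual model: it simulates data that follow the hypothesized structure (all coefficients and variable names are hypothetical) and estimates each indirect effect as a product of simple OLS slopes, where the study itself fits a full structural equation model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical latent scores; all path coefficients are made up for illustration.
ability = rng.normal(size=n)                                  # agent's understanding ability
intelligence = 0.6 * ability + rng.normal(scale=0.5, size=n)  # perceived intelligence
empathy = 0.5 * ability + rng.normal(scale=0.5, size=n)       # perceived empathy
trust = 0.7 * empathy + rng.normal(scale=0.5, size=n)
use_intention = 0.4 * intelligence + 0.5 * trust + rng.normal(scale=0.5, size=n)

def slope(x, y):
    """OLS slope of y on x, with intercept."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Route 1: ability -> perceived intelligence -> use intention
route1 = slope(ability, intelligence) * slope(intelligence, use_intention)
# Route 2: ability -> perceived empathy -> trust -> use intention
route2 = slope(ability, empathy) * slope(empathy, trust) * slope(trust, use_intention)
```

Both products come out positive under this simulation, mirroring the two positive indirect paths the study reports; a real analysis would also bootstrap confidence intervals for the indirect effects.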

Similar Papers
  • Conference Article
  • Cited by 2
  • 10.1109/iecon.2017.8217498
Two-layer fuzzy kernel regression for human emotional intention understanding
  • Oct 1, 2017
  • Luefeng Chen + 4 more

A two-layer fuzzy kernel regression (TLFKR) model is proposed for understanding human emotional intention in human-robot interaction. The TLFKR model consists of two layers: fuzzy c-means (FCM) with kernel ridge regression (Kernel 1) for the information analysis layer, and fuzzy support vector regressions (FSVR) (Kernel 2) for the intention understanding layer. The TLFKR model represents the weight impact of each piece of emotional information and aims to improve smooth human-robot interaction by endowing the robot with human emotional intention understanding capability. Experimental results show that the proposal obtains an intention understanding accuracy of 65.67%/68.33%/80.67% with cluster numbers c=2/3/6 (according to different genders/ages/nationalities), which is 7.34%/7.18%/8.67% and 18.67%/21.33%/33.67% higher than that of TLFSVR and SVR, respectively. Additionally, preliminary application experiments are performed in the developing emotional social robot system, where two mobile robots and volunteers experience a scenario of “drinking at a bar”, and the social robots are able to express basic emotions and understand human order intentions.
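The layered idea in this abstract, cluster the inputs fuzzily and then fit a regressor per cluster, can be sketched as follows. This is a loose NumPy illustration on made-up 1-D data, not the paper's TLFKR: it uses plain kernel ridge regression where the paper's second layer uses fuzzy support vector regression.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: returns membership matrix U (n x c) and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

def rbf(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

class KernelRidge:
    """Plain kernel ridge regression (stand-in for the paper's fuzzy SVR)."""
    def __init__(self, lam=1e-2, gamma=1.0):
        self.lam, self.gamma = lam, gamma
    def fit(self, X, y):
        self.X = X
        K = rbf(X, X, self.gamma)
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), y)
        return self
    def predict(self, Xq):
        return rbf(Xq, self.X, self.gamma) @ self.alpha

# Toy 1-D regression: cluster, fit one local model per cluster,
# then blend the local predictions by fuzzy membership.
X = np.linspace(0.0, 2.0 * np.pi, 60)[:, None]
y = np.sin(X[:, 0])
U, _ = fcm(X, c=2)
labels = U.argmax(axis=1)
models = [KernelRidge().fit(X[labels == k], y[labels == k]) for k in range(2)]
pred = sum(U[:, k] * models[k].predict(X) for k in range(2))
mae = np.abs(pred - y).mean()
```

The membership-weighted blend is what makes the combination "fuzzy": each local model contributes everywhere, but in proportion to how strongly a point belongs to its cluster.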

  • Research Article
  • Cited by 5
  • 10.3389/fpsyg.2024.1280739
Preschoolers’ cognitive flexibility and emotion understanding: a developmental perspective
  • Feb 8, 2024
  • Frontiers in Psychology
  • Li Mengxia

Introduction: Cognitive flexibility is the ability to adapt to changing tasks or problems, while emotion understanding is the ability to interpret emotional cues and information in different contexts. Both abilities are crucial for preschoolers’ socialization.
Methods: This study selected 532 preschool children aged 3–6 years from two kindergartens in a central province of China. The Dimensional Change Card Sorting (DCCS) task and emotion understanding tasks were used to investigate the developmental characteristics of cognitive flexibility, emotion understanding abilities, and their relationship.
Results: The results showed: (1) For cognitive flexibility, children older than 5 years scored significantly higher than younger children, and girls scored higher than boys. (2) For facial emotion recognition: (i) Children’s recognition scores for happy, sad, and angry expressions were significantly higher than for fear; children could accurately recognize happy, sad, and angry emotions by age 3, while fear recognition developed rapidly after age 5; (ii) Girls scored higher in recognizing fearful faces than boys. (3) For situational emotion understanding: (i) Children’s development followed the hierarchical order of external, desire, clue, and belief-based understanding. Situational and desire-based understanding already reached high levels by age 3, while clue and belief-based understanding developed quickly after age 5; (ii) Girls scored higher than boys in belief-based emotion understanding. (4) Cognitive flexibility significantly predicted children’s facial emotion recognition and their external and desire-based emotion understanding.
Discussion: Parents and teachers should cultivate children’s cognitive flexibility and provide personalized support. They should also fully grasp the characteristics of children’s emotion understanding development, systematically nurture their emotion understanding abilities, and leverage cognitive flexibility training to improve their emotion understanding.

  • Research Article
  • 10.1002/cav.2049
Editorial Issue 33.2
  • Mar 1, 2022
  • Computer Animation and Virtual Worlds
  • Nadia Magnenat Thalmann + 1 more

This issue contains five papers. In the first paper, Ege Tekgün, Muhtar Çağkan Uludağlı, Hüseyin Akcan, and Burak Erdeniz, from Izmir University of Economics in Turkey, assess the influence of virtual avatar anthropomorphism and the synchronicity of visuo-tactile stimulation on self-location using a virtual reality (VR) full-body illusion (FBI) experiment. During the experiment, half of the 36 participants observed a gender-matched full-body humanoid avatar from a first-person perspective (1PP) and the other half observed a less anthropomorphic full-body cubical avatar from 1PP while they were receiving synchronous and asynchronous visuo-tactile stimulation. Results show a significant main effect of the synchronicity of the visuo-tactile stimulation and of avatar body type on self-location, but no significant interaction was found between them. Moreover, the results of the self-report questionnaire provide additional evidence that participants who received synchronous visuo-tactile stimulation experienced not only greater changes in the feeling of self-location, but also increased ownership and referral of touch. In the second paper, Ke Li, Qian Zhang, and Jinyuan Jia, from Tongji University in Shanghai, and Hantao Zhao, from Southeast University in Nanjing, all in China, discuss the technology of presenting building information modeling (BIM) with an online platform and the difficulty of displaying large-scale BIM scenes flawlessly on mobile browsers, due to network bandwidth and browser performance limitations. The authors propose CEBOW, a Cloud-Edge-Browser Online architecture for visualizing BIM components with online solutions. The method combines transmission scheduling, cache management, and optimal initial loading into a single system architecture.
For network transmission testing, BIM scenes are used, and the results show that their method effectively reduces scene loading time and networking delay while improving the visualization of large-scale scenes. In the third paper, Yuzhu Dong and Eakta Jain, from the University of Florida in Gainesville, and Sophie Jörg, from Clemson University, all in the United States, discuss how the importance of eyes for virtual characters stems from their intrinsic social cues. They emphasize that eye animation impacts the perception of an avatar's internal emotional state. They present three large-scale experiments that investigate the extent to which viewers can identify whether an avatar is scared. The authors find that participants can identify a scared avatar with 75% accuracy using cues in the eyes, including pupil size variation, gaze, and blinks. Because eye trackers return pupil diameter in addition to gaze, their experiments inform practitioners that animating the pupil correctly will add expressiveness to a virtual avatar at negligible additional cost. These findings also have implications for creating expressive eyes in intelligent conversational agents and social robots. The fourth paper, by Osman Güler, from TUSAŞ Şehit Hakan Gülşen Vocational and Technical Anatolian High School in Ankara, and Serkan Savaş, from Çankırı Karatekin Üniversitesi, both in Turkey, presents a study showing that Interactive Boards (IBs) have the necessary hardware to run Stereoscopic 3D (S3D) training materials, but their panels do not have an S3D imaging feature. Therefore, only the Anaglyph S3D imaging method can be applied to IBs. Thus, an Anaglyph S3D training material on the skeletal system was prepared as an interactive 3D material design for IBs, and its effects in education were investigated. A Likert-type scale was developed to measure the usability of the training material on IBs, and the material was evaluated by 20 experts.
The data were analyzed with the SPSS statistical program and the results were interpreted. According to the results, the educational material was rated positively in terms of image characteristics, content, navigation, and ease of use; font sizes were moderately readable; and the feedback process and the help menu were moderately effective. In the last paper, Alexandra Sierra and Marie Postma, from Tilburg University in the Netherlands, and Menno Van Zaanen, from North-West University in Potchefstroom, South Africa, investigate whether the uncanny valley effect, which has already been found for the human-like appearance of virtual characters, can also be found for animal-like appearances. They conducted an online study in which six different animal designs were evaluated in terms of the following properties: familiarity, commonality, naturalness, attractiveness, interestingness, and animateness. The study participants differed in age (under 10–60 years) and origin (Europe, Asia, North America, and South America). For the evaluation of the results, the authors ranked the animal-likeness of each character using both expert opinion and participant judgments. They also investigated the effect of movement and morbidity. The results confirm the existence of the uncanny valley effect for virtual animals, especially with respect to familiarity and commonality, for both still and moving images. No uncanny valley effect was detected for interestingness and animateness.

  • Conference Article
  • Cited by 5
  • 10.1109/arso51874.2021.9542847
Intelligent Agent Deception and the Influence on Human Trust and Interaction
  • Jul 8, 2021
  • Kantwon Rogers + 1 more

As robots and intelligent agents are given more complex cognitive capabilities, it is only appropriate to assume that they will be able to commit acts of deceit much more readily. And yet, little attention has been given to investigating the effects that robot deception has on human interaction and on trust in the agent once the deception has been recognized. This paper examines how embodiment influences a person's trust of an intelligent agent that exhibits either deceptive or honest behavior that is either helpful or harmful in a financial scenario. Our results suggest that deceptive behavior decreases human trust whether the embodiment is physical or virtual, and regardless of whether the deception benefits the human. Moreover, trust levels were found to slightly influence a person's punishment or reward strategies, as well as their desire to reuse the intelligent agent in the future. Although exposure to deception causes negative effects, the majority of participants still found deception permissible when it benefited them. Additionally, physically embodied robots were shown to mitigate the negative aftereffects of deception more than virtually embodied ones. These results suggest that embodiment choice can have meaningful effects on the permissibility of deception conducted by intelligent agents.

  • Research Article
  • Cited by 39
  • 10.1109/tfuzz.2020.2966167
A Fuzzy Deep Neural Network with Sparse Autoencoder for Emotional Intention Understanding in Human-Robot Interaction
  • Jan 1, 2020
  • IEEE Transactions on Fuzzy Systems
  • Luefeng Chen + 4 more

A fuzzy deep neural network with sparse autoencoder (FDNNSA) is proposed for intention understanding based on human emotions and identification information (i.e., age, gender, and region), in which fuzzy C-means (FCM) is used to cluster the input data and a deep neural network with sparse autoencoder (DNNSA) is designed for emotional intention understanding in human-robot interaction. The aim is to make robots capable of recognizing human emotions and understanding the related emotional intention. FCM is suitable for gathering similar information, which reduces the dimensionality of the DNNSA calculations, and the sparse autoencoder makes the DNNSA's neurons sparse to reduce the complexity of the network, so that human-robot interaction runs smoothly. To validate the proposal, simulation experiments were completed on benchmark databases such as the CK+ facial expression database and the CASIA speech emotion corpus. The experimental results show that the proposal outperforms the baseline algorithms of Softmax regression (SR), DNNSA, FCM-based SR (FSR), Softplus, Gath-Geva-based DNNSA (GDNNSA), and ensemble DNNSA (EDNNSA). Preliminary application experiments are performed in the development of an emotional social robot system, where volunteers experience the scenario of "drinking at the bar". The obtained results indicate that the proposed FDNNSA can promote robots' understanding of human emotional intention.

  • Research Article
  • Cited by 4
  • 10.1017/prp.2018.11
Relations of Parent-Child Interaction to Chinese Young Children's Emotion Understanding
  • Jan 1, 2018
  • Journal of Pacific Rim Psychology
  • Heyi Zhang

The present study examined the relationship between parent-child interaction and children's emotion understanding ability. The participants were 56 three-year-old children and their mothers from Beijing, China. Mothers and children took part in three dyadic interaction tasks and were video recorded for coding of both mothers’ and children's behaviours. Each child completed three individually administered tests of emotion understanding, including the facial expression recognition task, emotion perspective-taking task, and emotion reason understanding task. Results demonstrated that both mothers’ and children's interaction behaviours were related to children's emotion understanding. Gender differences were found in the relationships between interaction behaviours and children's emotion understanding. Girls’ emotion understanding was associated with children's positive behaviours. In contrast, boys’ emotion understanding was not associated with children's positive behaviours, but related to mothers’ negative behaviours.

  • Conference Article
  • 10.1117/12.438119
Integration of battlefield visualization and agent technology
  • Aug 23, 2001
  • Philip J Emmerman + 2 more

There are several significant and related automation trends in the evolution of the tactical battlefield, necessary to support greatly increased mobility of our land forces. The first relates to the increased automation and distributed functionality of the nerve center, or tactical operation center (TOC), with the introduction of intelligent software agents. The anticipated dynamics of the future battlefield will require greatly increased mobility, information flow, information assimilation, and decision making by these centers. The second relates to the digitization of battlefield platforms. This digitization greatly reduces the uncertainty concerning these platforms and enables automated information exchange between the platforms and their TOC. The third is the rapid development of robotic or physical agents for numerous hazardous battlefield tasks. This paper integrates these developments with battlefield visualization to exploit their potential synergy and unification. Battlefield visualization programs are currently focused on effectively representing the physical environment to support planning, mission rehearsal, and situational awareness. As intelligent agents are developed, battlefield visualization must be enhanced to include the state, behavior, collaboration, and results of these agents. An initial representation of software and physical agents within a single battlefield visualization is presented. The major challenges to attaining this level of automation, in particular human interaction and trust, are addressed.

  • Supplementary Content
  • Cited by 18
  • 10.3389/fnhum.2013.00099
Beyond human intentions and emotions
  • Mar 27, 2013
  • Frontiers in Human Neuroscience
  • Elsa Juan + 5 more

Although significant advances have been made in our understanding of the neural basis of action observation and intention understanding in the last few decades by studies demonstrating the involvement of a specific brain network (the action observation network; AON), these have largely been based on experimental studies in which people have been considered as strictly isolated entities. However, we, as a social species, spend much more of our time performing actions while interacting with others. Research shows that a person's position along the continuum of perceived social isolation/bonding to others is associated with a variety of physical and mental health effects. Thus, there is a crucial need to better understand the neural basis of intention understanding performed in interpersonal and emotional contexts. To address this issue, we performed a meta-analysis of functional magnetic resonance imaging (fMRI) studies from the past decade that examined brain and cortical network processing associated with understanding the intention of others' actions vs. that associated with passionate love for others. Both overlapping and distinct cortical and subcortical regions were identified for intention and love, respectively. These findings provide scientists and clinicians with a set of brain regions that can be targeted in future neuroscientific studies on intention understanding, and help develop neurocognitive models of pair-bonding.

  • Conference Article
  • Cited by 84
  • 10.1145/3319502.3374786
The Persistence of First Impressions
  • Mar 9, 2020
  • Maike Paetzel + 2 more

Numerous studies in social psychology have shown that familiarization across repeated interactions improves people’s perception of the other. If and how these findings relate to human-robot interaction (HRI) is not well understood, even though such knowledge is crucial when pursuing long-term interactions. In our work, we investigate the persistence of first impressions by asking 49 participants to play a geography game with a robot. We measure how their perception of the robot changes over three sessions with three to ten days of zero exposure in between. Our results show that different perceptual dimensions stabilize within different time frames, with the robot’s competence being the fastest to stabilize and perceived threat the most fluctuating over time. We also found evidence that perceptual differences between robots with varying levels of humanlikeness persist across repeated interactions. This study has important implications for HRI design as it sheds new light on the influence of robots’ embodiment and interaction abilities. Moreover, it also impacts HRI theory as it presents novel findings contributing to research on the uncanny valley and robot perception in general.
CCS CONCEPTS: • Human-centered computing → Empirical studies in HCI; Natural language interfaces; • Computer systems organization → Robotics; • Computing methodologies → Intelligent agents.
ACM Reference Format: Maike Paetzel, Giulia Perugia, and Ginevra Castellano. 2020. The Persistence of First Impressions: The Effect of Repeated Interactions on the Perception of a Social Robot. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20), March 23–26, 2020, Cambridge, United Kingdom. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3319502.3374786

  • Conference Article
  • Cited by 9
  • 10.1109/ijcnn.2015.7280587
Human intention understanding based on object affordance and action classification
  • Jul 1, 2015
  • Zhibin Yu + 3 more

Intention understanding is a basic requirement for human-machine interaction. Action classification and object affordance recognition are two possible ways to understand human intention. In this study, a Multiple Timescale Recurrent Neural Network (MTRNN) is adapted to analyze human action. A supervised MTRNN, an extension of the Continuous Timescale Recurrent Neural Network (CTRNN), is used for action and intention classification. In addition, deep learning algorithms have proved efficient at understanding complex concepts in complex real-world environments. A stacked denoising autoencoder (SDA) is used to extract human implicit-intention-related information from the observed objects. A feature-based object detection method, Speeded Up Robust Features (SURF), is also used to find the object information. Object affordance describes the interactions between an agent and the environment. In this paper, we propose an intention recognition system using ‘action classification’ and ‘object affordance information’. Experimental results show that the supervised MTRNN is able to use different information in different time periods and improves the intention recognition rate by cooperating with the SDA.
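The denoising-autoencoder building block that an SDA stacks can be sketched in a few lines of NumPy. This is a minimal single-layer illustration on random toy data, with tied weights and without the sparsity penalty, the stacking, or the SURF features and MTRNN used in the paper; the "object feature" inputs are entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random binary toy vectors standing in for object feature inputs (hypothetical data).
X = (rng.random((200, 32)) > 0.5).astype(float)

n_hidden, lr, noise = 16, 0.5, 0.3
W = rng.normal(scale=0.1, size=(32, n_hidden))  # tied encoder/decoder weights
b_h = np.zeros(n_hidden)                        # hidden bias
b_v = np.zeros(32)                              # visible (reconstruction) bias

def recon_loss(data):
    H = sigmoid(data @ W + b_h)
    R = sigmoid(H @ W.T + b_v)
    return np.mean((R - data) ** 2)

loss_before = recon_loss(X)
for _ in range(50):
    Xn = X * (rng.random(X.shape) > noise)      # corrupt inputs by random masking
    H = sigmoid(Xn @ W + b_h)                   # encode the corrupted input
    R = sigmoid(H @ W.T + b_v)                  # reconstruct the *clean* input
    dR = (R - X) * R * (1.0 - R) / len(X)       # error signal at decoder output (up to a constant)
    dH = (dR @ W) * H * (1.0 - H)               # backpropagated signal at hidden layer
    W -= lr * (Xn.T @ dH + dR.T @ H)            # both gradient paths through tied W
    b_h -= lr * dH.sum(axis=0)
    b_v -= lr * dR.sum(axis=0)
loss_after = recon_loss(X)
```

Stacking repeats this step, feeding each trained hidden layer's activations as input to the next layer; the denoising objective is what forces the hidden code to capture structure rather than copy the input.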

  • Research Article
  • Cited by 66
  • 10.1109/tfuzz.2018.2809691
Three-Layer Weighted Fuzzy Support Vector Regression for Emotional Intention Understanding in Human–Robot Interaction
  • Oct 1, 2018
  • IEEE Transactions on Fuzzy Systems
  • Luefeng Chen + 6 more

A three-layer weighted fuzzy support vector regression (TLWFSVR) model is proposed for understanding human intention, based on the emotion-identification information in human–robot interaction. The TLWFSVR model consists of three layers: adjusted weighted kernel fuzzy c-means for data clustering, fuzzy support vector regressions (FSVR) for information understanding, and weighted fusion for intention understanding. It aims to guarantee the quick convergence and satisfactory performance of the local FSVR by adjusting the weights of each feature in each cluster, in such a way that the importance of different emotion-identification information is represented. Moreover, smooth human-oriented interaction can be obtained by endowing the robot with human intention understanding capability. Experimental results show that the proposed TLWFSVR model obtains higher intention understanding accuracy and lower computational time than two-layer fuzzy support vector regression, support vector regression, and a back propagation neural network (BPNN). Additionally, preliminary application experiments are performed in a developing human–robot interaction system, called the emotional social robot system, where 12 volunteers and 2 mobile robots experience a scenario of “drinking at a bar.” Application results indicate that the bartender robot is able to understand customers’ order intentions.

  • Research Article
  • Cited by 127
  • 10.1075/is.16.2.01mac
Individual differences predict sensitivity to the uncanny valley
  • Nov 20, 2015
  • Interaction Studies
  • Karl F Macdorman + 1 more

It can be creepy to notice that something human-looking is not real. But can sensitivity to this phenomenon, known as the uncanny valley, be predicted from superficially unrelated traits? Based on results from at least 489 participants, this study examines the relation between nine theoretically motivated trait indices and uncanny valley sensitivity, operationalized as increased eerie ratings and decreased warmth ratings for androids presented in videos. Animal Reminder Sensitivity, Neuroticism, its Anxiety facet, and Religious Fundamentalism significantly predicted uncanny valley sensitivity. In addition, Concern over Mistakes and Personal Distress significantly predicted android eerie ratings but not warmth. The structural equation model indicated that Religious Fundamentalism operates indirectly, through robot-related attitudes, to heighten uncanny valley sensitivity, while Animal Reminder Sensitivity increases eerie ratings directly. These results suggest that the uncanny valley phenomenon may operate through both sociocultural constructions and biological adaptations for threat avoidance, such as the fear and disgust systems. Trait indices that predict uncanny valley sensitivity warrant investigation by experimental methods to explicate the processes underlying the uncanny valley phenomenon.

  • Research Article
  • Cited by 23
  • 10.1111/j.1467-9507.2007.00391.x
Emotion Understanding in English‐ and Spanish‐speaking Preschoolers Enrolled in Head Start
  • Mar 26, 2007
  • Social Development
  • Andrew Downs + 2 more

Research assessing children's emotion understanding has increased over the past several years. Despite the proliferation of research, there have been few studies conducted examining the development of emotion understanding in children from diverse backgrounds. Further, there has been no research conducted examining the psychometric properties of emotion understanding measures when used with children from diverse backgrounds. A total of 597 preschool children from low‐income families enrolled in Head Start (248 Spanish‐speaking and 349 English‐speaking) were given an emotion understanding assessment in their native language at two sessions separated by six months. All children showed significant growth in emotion understanding abilities from time 1 to time 2, with English‐speaking children generally outperforming Spanish‐speaking children. The psychometric performance of the measure was analyzed for both English and Spanish samples and for English‐speaking children at different levels of language ability.

  • Research Article
  • Cited by 10
  • 10.1080/07341512.2018.1544344
Cognition and emotions in Japanese humanoid robotics
  • Apr 3, 2018
  • History and Technology
  • Yulia Frumer

From the beginning of the twentieth century, Japanese roboticists have observed specific features in the physical designs of humanoid robots that cause users to react with either fear or affection. Analyzing the sources of these reactions, robotics engineers eliminated from robots those features that might trigger negative associations, and instead embedded their designs with cues to norms, theories, and cultural references valued by their society. By analyzing Nishimura Makoto’s building of an affable artificial human named Gakutensoku, Mori Masahiro’s discovery of the phenomenon of the ‘uncanny valley’, and Ishiguro Hiroshi’s current employment of cognitive, social, and psychological sciences to overcome the ‘uncanny’ impression of his robots, this essay claims that the development of the field of humanoid robotics in Japan was driven by concern with human emotion and cognition, and shaped by Japanese roboticists’ own associations with the social and intellectual environments of their time.

  • Research Article
  • Cited by 3
  • 10.1016/j.heliyon.2024.e27977
A stimulus exposure of 50 ms elicits the uncanny valley effect
  • Mar 1, 2024
  • Heliyon
  • Jodie Yam + 2 more

The uncanny valley (UV) effect captures the observation that artificial entities with near-human appearances tend to create feelings of eeriness. Researchers have proposed many hypotheses to explain the UV effect, but the visual processing mechanisms of the UV have yet to be fully understood. In the present study, we examined whether the UV effect is as accessible with brief stimulus exposures as with long ones (Experiment 1). Forty-one participants, aged 21–31, rated each human-robot face, presented for either a brief (50 ms) or long duration (3 s), in terms of attractiveness, eeriness, and humanness (UV indices) on a 7-point Likert scale. We found that brief and long exposures to stimuli generated a similar UV effect. This suggests that the UV effect is accessible in early visual processing. We then examined the effect of exposure duration on the categorisation of visual stimuli in Experiment 2. Thirty-three participants, aged 21–31, categorised faces as either human or robot in a two-alternative forced choice task. Their response accuracy and variance were recorded. We found that brief stimulus exposures generated significantly higher response variation and more errors than the long exposure condition. This indicated that participants were more uncertain in categorising faces in the brief exposure condition due to insufficient time. Further comparisons between Experiments 1 and 2 revealed that the eeriest faces were not the hardest to categorise. Overall, these findings indicate (1) that both the UV effect and categorical uncertainty can be elicited through brief stimulus exposure, but (2) that categorical uncertainty is unlikely to cause the UV effect. These findings provide insights into the perception of robotic faces and implications for the design of robots, androids, avatars, and artificial intelligence agents.
