Abstract

Trust is a crucial guide in interpersonal interactions, helping people to navigate through social decision-making problems and cooperate with others. In human--computer interaction (HCI), trustworthy computer agents foster appropriate trust by supporting a match between their perceived and actual characteristics. As computers are increasingly endowed with capabilities for cooperation and intelligent problem-solving, it is critical to ask under which conditions people discern and distinguish trustworthy from untrustworthy technology. We present an interactive cooperation game framework allowing us to capture human social attributions that indicate trust in continued and interdependent human--agent cooperation. Within this framework, we experimentally examine the impact of two key dimensions of social cognition, warmth and competence, as antecedents of behavioral trust and self-reported trustworthiness attributions of intelligent computers. Our findings suggest that, first, people infer warmth attributions from unselfish vs. selfish behavior and competence attributions from competent vs. incompetent problem-solving. Second, warmth statistically mediates the relation between unselfishness and behavioral trust as well as between unselfishness and perceived trustworthiness. We discuss the possible role of human social cognition for human--computer trust.
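The abstract reports that perceived warmth statistically mediates the link between an agent's unselfish behavior and trust. Purely as an illustration of what such an indirect-effect (mediation) test involves, and not the authors' actual analysis, the following Python sketch estimates an indirect effect with a percentile bootstrap on synthetic data; all variable names, effect sizes, and the simulated data are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data (not the study's data): one row per participant.
n = 200
unselfish = rng.integers(0, 2, n)                  # 0 = selfish agent, 1 = unselfish agent
warmth = 0.8 * unselfish + rng.normal(0, 1, n)     # perceived warmth rating
trust = 0.6 * warmth + 0.1 * unselfish + rng.normal(0, 1, n)  # behavioral trust measure

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits: m ~ x and y ~ x + m."""
    a = np.polyfit(x, m, 1)[0]                     # path a: x -> m
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]    # path b: m -> y, controlling for x
    return a * b

# Percentile bootstrap confidence interval for the indirect effect.
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(unselfish[idx], warmth[idx], trust[idx]))

point = indirect_effect(unselfish, warmth, trust)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval that excludes zero would indicate a reliable indirect (mediated) path from agent behavior through perceived warmth to trust; the simulated numbers above are only there to make the sketch runnable.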

Highlights

  • Computer agents are increasingly capable of intelligent human–agent problem-solving (Clarke and Smyth, 1993; Nass et al., 1996; Dautenhahn, 1998; Hoc, 2000)

  • We report a study conducted within a human–agent cooperation paradigm, a 2-player puzzle game

  • Using a prototyping platform for multimodal user interfaces (Mattar et al., 2015), we developed a cooperation game paradigm that permits the manipulation of warmth and competence in human–computer interaction (see the illustrative sketch after this list)
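The study manipulates warmth (unselfish vs. selfish agent behavior) and competence (competent vs. incompetent problem-solving) as independent factors. As a rough illustration only, and not the authors' implementation, the Python sketch below shows how an agent's move selection in a cooperation game could be parameterized along these two dimensions; the class names, payoff scores, and noise model are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
import random

class Warmth(Enum):
    SELFISH = "selfish"       # agent maximizes its own payoff
    UNSELFISH = "unselfish"   # agent also weighs the partner's payoff

class Competence(Enum):
    LOW = "low"               # noisy, error-prone puzzle moves
    HIGH = "high"             # near-optimal puzzle moves

@dataclass
class AgentCondition:
    warmth: Warmth
    competence: Competence

def choose_move(condition: AgentCondition, candidate_moves):
    """Pick a move according to the 2x2 condition.

    Each candidate move carries hypothetical 'own_gain' and 'joint_gain'
    scores; low competence adds random error to the evaluation.
    """
    def score(move):
        value = (move["joint_gain"] if condition.warmth is Warmth.UNSELFISH
                 else move["own_gain"])
        if condition.competence is Competence.LOW:
            value += random.gauss(0, 2.0)   # degraded problem-solving
        return value
    return max(candidate_moves, key=score)

# Example: an unselfish but incompetent agent evaluating two hypothetical moves.
agent = AgentCondition(Warmth.UNSELFISH, Competence.LOW)
moves = [{"name": "help_partner", "own_gain": 1, "joint_gain": 5},
         {"name": "grab_points", "own_gain": 4, "joint_gain": 2}]
print(choose_move(agent, moves)["name"])
```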


Introduction

Computer agents are increasingly capable of intelligent human–agent problem-solving (Clarke and Smyth, 1993; Nass et al., 1996; Dautenhahn, 1998; Hoc, 2000). The social and affective foundations of strategic decision-making between humans and agents are attracting growing interest from researchers. Strategic interaction builds on the perceived intent of the computerized counterpart, coordinated joint actions, and fairness judgments (Lin and Kraus, 2010; de Melo et al., 2016; Gratch et al., 2016). Since computer agents are treated as social actors and are perceived in ways similar to how we perceive other humans (Nass et al., 1994; Nass and Moon, 2000), understanding and shaping interactions with such agents is becoming increasingly important. In this paper, we ask whether fundamental components of human social cognition, warmth and competence attributions, impact trust in HCI.

A Social Cognition Perspective on Human–Computer Trust

