Abstract

Robots increasingly act as our social counterparts in domains such as healthcare and retail. For these human-robot interactions (HRI) to be effective, the question arises whether we trust robots the same way we trust humans. We investigated whether the determinants competence and warmth, known to influence interpersonal trust development, also influence trust development in HRI, and what role anthropomorphism plays in this interrelation. In two online studies with a 2 × 2 between-subjects design, we investigated the role of robot competence (Study 1) and robot warmth (Study 2) in trust development in HRI. Each study also explored the role of robot anthropomorphism in the respective interrelation. Videos showing an HRI were used to manipulate robot competence (through varying gameplay competence) and robot anthropomorphism (through verbal and non-verbal design cues and the robot's presentation in the study introduction) in Study 1 (n = 155), as well as robot warmth (through varying compatibility of intentions with the human player) and robot anthropomorphism (manipulated as in Study 1) in Study 2 (n = 157). Results show a positive effect of robot competence (Study 1) and robot warmth (Study 2) on trust development in robots regarding anticipated trust and attributed trustworthiness. Subjective perceptions of competence (Study 1) and warmth (Study 2) mediated the interrelations in question. With respect to the applied manipulations, robot anthropomorphism moderated neither the interrelation of robot competence and trust (Study 1) nor that of robot warmth and trust (Study 2). With respect to subjective perceptions, perceived anthropomorphism moderated the effect of perceived competence (Study 1) and perceived warmth (Study 2) on trust at the attributional level. Overall, the results support the importance of robot competence and warmth for trust development in HRI and imply that determinants of trust development in interpersonal interaction transfer to HRI. The results further indicate a possible role of perceived anthropomorphism in these interrelations and support a combined consideration of these variables in future studies. These insights deepen the understanding of key variables and their interaction in trust dynamics in HRI and suggest potentially relevant design factors for enabling appropriate trust levels and, in turn, desirable HRI. Methodological and conceptual limitations underline the benefits of a more robot-specific approach in future research.

Highlights

  • Besides social interaction with other humans, we are increasingly confronted with innovative, intelligent technologies as our social counterparts

  • Trust is a basic precondition for effective human-robot interaction (HRI) (Hancock et al., 2011; van Pinxteren et al., 2019), yet research in different contexts has revealed particular skepticism toward machines compared to humans regarding trustworthiness (Dietvorst et al., 2015) and related variables such as cooperation (Merritt and McGee, 2012; Ishowo-Oloko et al., 2019), which matters in consequential fields of application such as medicine and healthcare (Promberger and Baron, 2006; Ratanawongsa et al., 2016)

  • While research agrees on the importance of trust for effective HRI (e.g., Freedy et al., 2007; Hancock et al., 2011; van Pinxteren et al., 2019), robot-related determinants of trust development in HRI have barely been considered or systematically explored


Introduction

Besides social interaction with other humans, we are increasingly confronted with innovative, intelligent technologies as our social counterparts. Social robots, which are designed to interact and communicate with humans (Bartneck and Forlizzi, 2004), represent a popular example of such technologies. They are becoming more and more present in our everyday lives, e.g., in healthcare (e.g., Beasley, 2012), retail, and transportation, and support us in daily tasks like shopping or ticket purchase. Oftentimes their interaction design does not even allow a clear distinction from human counterparts, e.g., when they appear in the form of chatbots. It therefore seems worthwhile to look into the theoretical foundations of trust development in interpersonal interaction, especially since trust is a basic precondition for effective HRI (Hancock et al., 2011; van Pinxteren et al., 2019), and research in different contexts has revealed particular skepticism toward machines compared to humans regarding trustworthiness (Dietvorst et al., 2015) and related variables such as cooperation (Merritt and McGee, 2012; Ishowo-Oloko et al., 2019), which matters in consequential fields of application such as medicine and healthcare (Promberger and Baron, 2006; Ratanawongsa et al., 2016).

