Abstract

Purpose: The purpose of this study was to investigate trust within human-AI teams. Trust is an essential mechanism for team success and effective human-AI collaboration.

Design/methodology/approach: In an online experiment, the authors investigated whether trust perceptions and behaviours differ when a new AI teammate is introduced rather than a new human teammate. A between-subjects design was used: a total of 127 subjects were presented with a hypothetical team scenario and randomly assigned to one of two conditions, new AI teammate or new human teammate.

Findings: As expected, perceived trustworthiness of the new team member and affective interpersonal trust were lower for an AI teammate than for a human teammate. No differences were found in cognitive interpersonal trust or trust behaviours. The findings suggest that humans can rationally trust an AI teammate when its competence and reliability are presumed, but the emotional aspect of trust seems to be more difficult to develop.

Originality/value: This study contributes to human-AI teamwork research by connecting trust research in human-only teams with trust insights in human-AI collaborations, integrating the existing literature on teamwork and on trust in intelligent technologies with the first empirical findings on trust towards AI teammates.
