Abstract

The paper reports on two empirical studies that provide the first examination of how the presentation of an AI teammate's identity, responsibility, and capability shapes humans' perceptions surrounding AI teammate adoption before any interaction takes place. Study 1's results indicated that AI teammates are accepted when they share equal responsibility on a task with humans, but other perceptions, such as job security, generally decline as AI teammates are given more responsibility. Study 1 also revealed that identifying an AI as a tool rather than a teammate can yield small benefits to human perceptions of job security and adoption. Study 2 revealed that the negative impacts of increasing responsibility can be mitigated by presenting AI teammates' capabilities as being endorsed by coworkers and by one's own past experience. This paper discusses how these results can be used to best balance the presentation of AI teammates' capabilities and responsibilities, as well as whether to identify AI as a teammate.
