Abstract

As nonhuman agents are integrated into the workforce, the question arises to what extent advice seeking in technology-infused environments depends on the perceived fit between agent and task, and whether humans are willing to consider advice from nonhuman agents at all. In this experiment, participants sought advice from human, robot, or computer agents when performing a social or analytical task, with the task being either known or unknown at the time the agent was selected. In the agent-first condition, participants first chose an adviser and then received their task assignment; in the task-first condition, participants first received the task assignment and then chose an adviser. In the agent-first condition, we expected participants to prefer human to nonhuman advisers and to subsequently comply more with their advice when assigned the social rather than the analytical task. In the task-first condition, we expected advice seeking and compliance to be guided by stereotypical assumptions regarding an agent's task expertise. The findings indicate that the human adviser was chosen more often than the nonhuman agents in the agent-first condition, whereas adviser choices were calibrated to perceived agent-task fit in the task-first condition. Compliance rates were not generally calibrated to agent-task fit.
