Abstract

Advances in artificial intelligence contribute to the increasing automation of decisions. In a healthcare-scheduling context, this study compares the effects of the decision agent and of explanations for decisions on decision recipients’ perceptions of justice. In a 2 (decision agent: automated vs. human) × 3 (explanation: no explanation vs. equality explanation vs. equity explanation) between-subjects online study, 209 healthcare professionals were asked to put themselves in a situation in which their vacation request was denied by either a human or an automated agent. Participants either received no explanation or an explanation based on equality or equity norms. Perceptions of interpersonal justice were stronger for the human agent. Additionally, participants perceived human agents as offering more voice and automated agents as being more consistent in decision-making. When no explanation was given, perceptions of informational justice were impaired only for the human decision agent. In the study's second part, participants took the perspective of a decision-maker and were given the choice to delegate decision-making to an automated system. Participants who delegated an unpleasant decision to the system frequently externalized responsibility and showed different response patterns when confronted by a decision recipient who asked for a rationale for the decision.
