Abstract

Organizational decision-makers often face difficult decisions. One increasingly popular way to improve those decisions is to use information and recommendations provided by data-driven algorithms (i.e., AI advisors). Advice is especially important when decisions involve conflicts of interest, such as ethical dilemmas. A defining characteristic of ethical decision-making is that it often involves exploring and imagining what would, could, and should happen under alternative conditions (i.e., what-if scenarios). Such imaginative "counterfactual thinking," however, is not explored by AI advisors unless they are pre-programmed to do so. Drawing on Fairness Theory, we identify key counterfactual scenarios that programmers can incorporate into the code of AI advisors to improve fairness perceptions. We conducted an experimental study to test our predictions; the results showed that explanations including counterfactual scenarios were perceived as fairer by recipients. Taken together, we believe that counterfactual modelling will improve ethical decision-making by actively modelling the what-if scenarios valued by recipients. We further discuss additional benefits of counterfactual modelling, such as inspiring decision-makers to engage in counterfactual thinking within their own decision-making process.
