Abstract
Organizational decision-makers often face difficult decisions, and an increasingly popular way to improve them is to draw on information and recommendations provided by data-driven algorithms (i.e., AI advisors). Advice is especially valuable when decisions involve conflicts of interest, such as ethical dilemmas. A defining characteristic of ethical decision-making is that it often involves exploring and imagining what would, could, and should happen under alternative conditions (i.e., what-if scenarios). Such imaginative “counterfactual thinking,” however, is absent from AI advisors unless they are explicitly programmed to perform it. Drawing on Fairness Theory, we identify key counterfactual scenarios that programmers can incorporate into the code of AI advisors to improve fairness perceptions. We conducted an experimental study to test our predictions; the results showed that explanations including counterfactual scenarios were perceived as fairer by recipients. Taken together, we argue that counterfactual modelling can improve ethical decision-making by actively modelling the what-if scenarios that recipients value. We further discuss additional benefits of counterfactual modelling, such as inspiring decision-makers to engage in counterfactual thinking in their own decision-making processes.
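To make the idea concrete, below is a minimal Python sketch of how an AI advisor's recommendation might be augmented with the three counterfactual scenarios that Fairness Theory distinguishes: what would have happened under an alternative choice, what the advisor could have recommended instead, and what it should recommend given the applicable norm. This is an illustrative sketch only, not the study's implementation; all names (`CounterfactualExplanation`, `build_explanation`, and the example inputs) are hypothetical.

```python
# Illustrative sketch (not the authors' implementation): wrapping an AI
# advisor's recommendation with the would/could/should counterfactual
# scenarios emphasized by Fairness Theory. All identifiers are hypothetical.

from dataclasses import dataclass


@dataclass
class CounterfactualExplanation:
    recommendation: str
    would: str   # projected outcome under the rejected alternative
    could: str   # feasible alternative the advisor actually considered
    should: str  # the norm or principle the recommendation follows


def build_explanation(recommendation: str, alternative: str,
                      projected_outcome: str, norm: str) -> CounterfactualExplanation:
    """Attach would/could/should counterfactual scenarios to a recommendation."""
    return CounterfactualExplanation(
        recommendation=recommendation,
        would=f"Had we chosen '{alternative}', the likely outcome would have been {projected_outcome}.",
        could=f"'{alternative}' was a feasible option and was explicitly evaluated.",
        should=f"The recommendation follows the norm: {norm}.",
    )


if __name__ == "__main__":
    # Toy usage with placeholder content.
    expl = build_explanation(
        recommendation="Allocate the remaining budget to Team A",
        alternative="Allocate the remaining budget to Team B",
        projected_outcome="a likely delay to the project with the binding external deadline",
        norm="prioritise projects with binding external deadlines",
    )
    print(expl.recommendation)
    print(expl.would)
    print(expl.could)
    print(expl.should)
```

One plausible design choice, under these assumptions, is to keep the three counterfactual components as separate fields rather than a single free-text explanation, so that an interface can surface each scenario explicitly to the advice recipient.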