Abstract

Whether we like it or not, algorithmic decision-making systems (ADMSs) are all around us. These systems assist both public institutions and private organizations in making decisions that significantly affect our lives. The widespread use of artificial intelligence (AI) and machine learning (ML) systems, and the potential risks of using them, are the subject of intensive, ongoing research; ensuring their fairness and transparency is imperative. It is widely accepted that ADMSs should be subject to human supervision and evaluated for the fairness laypeople perceive in them. Laypeople's perceptions of an ADMS's fairness, their understanding of the reasons underlying the system's outcome (decision), and their comprehension of the link between the explanations and the results all influence their willingness to trust the system, use it, and accept its decisions. To determine and better understand which factors affect laypeople's perceptions of the fairness of algorithmic decisions, we conducted an online between-subject experiment, employing a case study of a simulated AI-based recruitment decision-support system. We focused on three aspects: system characteristics (SC), personality characteristics (PC), and demographic characteristics (DC). We conducted an in-depth analysis to determine which explanation increases perceived fairness the most. Based on the results, we propose a framework for predicting a layperson's perception of the fairness of the explanations. Our findings may help clarify how to involve humans in the development and evaluation of ADMSs, how to create personalized explanations based on the SC as well as on users' PC and DC, and, consequently, how to enhance laypeople's fairness perceptions regarding ADMSs.
