<p id="p00005">Inequality is the biggest challenge for global social and economic development, which has the potential to impede the goal of global sustainable development. One way to reduce such inequality is to use artificial intelligence (AI) for decision-making. However, recent research has found that while AI is more accurate and is not influenced by personal bias, people are generally averse to AI decision-making and perceive it as being less fair. Given the theoretical and practical importance of fairness perceptions of AI decision-making, a growing number of researchers have recently begun investigating how individuals form fairness perceptions in regard to AI decision-making. However, existing research is generally quite scattered and disorganized, which has limited researchers’ and practitioners’ understanding of fairness perceptions of AI decision-making from a conceptual and systematic perspective. Thus, this review first divided the relevant research into two categories based on the type of decision makers. The first category is fairness perception research in which AI is the decision-maker. Drawn upon moral foundations theory, fairness heuristic theory, and fairness theory, these studies explain how AI characteristics (i.e., transparency, controllability, rule, and appropriateness) and individual characteristics (demographics, personalities, and values) affect individuals’ fairness perceptions. Existing research revealed that there were three main underlying cognitive mechanisms underlying the relationship between AI or individual characteristics and their fairness perceptions of AI decision-making: (a) individual characteristics and AI appropriateness affect individuals’ fairness perceptions via their moral intuition; (b) AI transparency affects individuals’ fairness perceptions via their perceived understandability; and (c) AI controllability affects individuals’ fairness perceptions via individuals’ needs fulfillment. The second category is fairness perception research that compares AI and humans as decision-makers. Based on computers are social actors (CASA) hypothesis, the algorithm reductionism perspective, and the machine heuristic model, these studies explained how individuals’ different perceptions of attributes between AI and humans (i.e., mechanistic attributes vs. societal attributes, simplified attributes vs. complex attributes, objective attributes vs. subjective attributes) affect individuals’ fairness perceptions and have revealed some inconsistent research findings. Specifically, some studies found that individuals perceive AI decision makers as being mechanical (i.e., lack of emotion and human touch) and simplified (i.e., decontextualization) than human decision makers, which leads individuals perceive that the decisions made by humans rather than AI are fairer. However, other studies found that compared to human decision makers, individuals regard AI decision makers as being more objective (i.e., consistent, neutral, and free of responsibility) than human decision makers, which leads individuals perceive that the decisions made by AI rather than human are fairer. Also, a small number of studies found that there is no significant difference in individuals’ fairness perceptions between AI decision makers and human decision makers. Such mixed findings reveal that individuals’ fairness perceptions of decision-making may be dependent on the specifical attributes of AI that individuals perceived in different contexts. 
Based on this systematic review, we propose five promising directions for future research to help expand the fairness perception literature in the context of AI decision-making: (a) exploring the affective mechanisms underlying the relationship between AI or individual characteristics and individuals’ fairness perceptions of AI decision-making; (b) exploring the antecedents of interactional fairness perceptions of AI decision-making; (c) exploring fairness perceptions when robotic AI is the decision maker; (d) clarifying the boundary conditions under which AI decision-making is considered fairer than human decision-making, and vice versa; and (e) exploring fairness perceptions when AI and humans make decisions jointly. We hope this review contributes, both theoretically and practically, to the understanding of individuals' fairness perceptions of AI decision-making.