Abstract

With the widespread use of artificial intelligence and automated decision-making (ADM), concerns are increasing about automated decisions that are biased against certain social groups, such as women and racial minorities. The public's skepticism and the danger of algorithmic discrimination are widely acknowledged, yet the role of key factors constituting the context of discriminatory situations remains underexplored. This study examined people's perceptions of gender bias in ADM, focusing on three factors that shape responses to discriminatory automated decisions: the target of discrimination (subject vs. other), the gender identity of the subject, and the situational contexts that engender biases. Based on a randomised experiment (N = 602), we found stronger negative reactions to automated decisions that discriminate against the subject's own gender group than to those discriminating against other gender groups, evidenced by lower perceived fairness and trust in ADM, stronger negative emotions, and a greater tendency to question the outcome. These negative reactions were more pronounced among participants from underserved gender groups than among men. Participants were also more sensitive to biases in economic and occupational contexts than in other situations. These findings suggest that perceptions of algorithmic biases should be understood in relation to the public's lived experience of inequality and injustice in society.
