Abstract

As a specific type of algorithmic discrimination, algorithmic proxy discrimination (APD) arises when machine learning algorithms, through their operational logic, adopt facially neutral proxies for legally protected features, thereby producing disparate impacts on legally protected groups. Based on the relationship between sensitive feature data and the outcome of interest, APD can be classified as directly or indirectly conductive. In the context of big data, the abundance and complexity of algorithmic proxy relations render APD inescapable and difficult to discern, while the opacity of those proxy relations impedes the attribution of liability for APD. Traditional antidiscrimination law strategies, such as blocking access to sensitive data or imposing disparate impact liability, are modeled on human decision-making and therefore cannot effectively regulate APD. This paper proposes a regulatory framework that targets APD from both the data and the algorithmic aspects.
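To make the proxy mechanism concrete, the following is a minimal synthetic sketch (not drawn from the paper; the "zip_code" feature, group proportions, and base rates are all illustrative assumptions) showing how a model that never sees the protected attribute can still reproduce disparate impact through a correlated, facially neutral proxy:

```python
# Illustrative sketch of conductive APD: the protected attribute is blocked
# from training, yet a correlated, facially neutral proxy (a hypothetical
# binary "zip_code" feature) lets the model reproduce disparate impact.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected group membership (never shown to the model).
group = rng.integers(0, 2, size=n)

# Proxy: residential segregation makes zip code correlate with the group.
zip_code = (rng.random(n) < np.where(group == 1, 0.8, 0.2)).astype(float)

# Historical outcomes already skewed against group 1 (encoded bias).
y = (rng.random(n) < np.where(group == 1, 0.3, 0.7)).astype(int)

# Train on the facially neutral proxy alone.
model = LogisticRegression().fit(zip_code.reshape(-1, 1), y)
pred = model.predict(zip_code.reshape(-1, 1))

# Disparate impact re-emerges despite blocking the sensitive feature.
for g in (0, 1):
    print(f"selection rate for group {g}: {pred[group == g].mean():.2f}")
```

Under these assumed numbers, the model selects roughly 80% of group 0 but only about 20% of group 1, even though the protected attribute was excluded, which illustrates why data-blocking strategies alone fail against APD.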
