Abstract

Algorithmic discrimination poses an increased risk to the legal principle of equality. Scholarly accounts of this challenge are emerging in the context of EU equality law, but the question of the resilience of the legal framework has not yet been addressed in depth. Exploring three central incompatibilities between the conceptual map of EU equality law and algorithmic discrimination, this article investigates how purposively revisiting selected conceptual and doctrinal tenets of EU non-discrimination law offers pathways towards enhancing its effectiveness and resilience. First, I argue that predictive analytics are likely to give rise to intersectional forms of discrimination, which challenge the unidimensional understanding of discrimination prevalent in EU law. Second, I show how proxy discrimination in the context of machine learning questions the grammar of EU non-discrimination law. Finally, I address the risk that new patterns of systemic discrimination emerge in the algorithmic society. Throughout the article, I show that looking at the margins of the conceptual and doctrinal map of EU equality law offers several pathways to tackling algorithmic discrimination. This exercise is particularly important with a view to securing a technology-neutral legal framework robust enough to provide an effective remedy to algorithmic threats to fundamental rights.
