Abstract
Differentiation is often intrinsic to the functioning of algorithms. Within large data sets, ‘differentiating grounds’, such as correlations or patterns, are found, which can in turn be applied by decision-makers to distinguish between individuals or groups of individuals. As the use of algorithms becomes more widespread, the chance that algorithmic forms of differentiation result in unfair outcomes increases. Intuitively, certain (random) algorithmic classification acts, and the decisions based on them, seem to run counter to the fundamental notion of equality. It nevertheless remains difficult to articulate why exactly we find certain forms of algorithmic differentiation fair or unfair vis-à-vis the general principle of equality. Concentrating on Dworkin’s notions of brute and option luck, this discussion paper presents a luck egalitarian perspective as a potential approach for making this evaluation possible. The paper then considers whether this perspective can also inform the interpretation of EU data protection legislation, and the General Data Protection Regulation (GDPR) in particular. Given data protection’s direct focus on the data processes underlying algorithms, the GDPR might, when informed by egalitarian notions, offer a more practically feasible way of governing algorithmic inequalities.