Abstract

This paper discusses various forms and sources of algorithmic discrimination. In particular, we explore the connection between the seemingly ‘voluntary’ sharing or selling of one’s data on the one hand and the potential risks of automated decision-making based on big data and artificial intelligence on the other. We argue that implementing algorithm-driven profiling or decision-making mechanisms will, in many cases, disproportionately disadvantage certain vulnerable groups that are already disadvantaged by many existing datafication practices. We call into question how voluntary such data sharing really is, especially for these vulnerable groups, and claim that their members are often more likely to give away their data. If existing datafication practices exacerbate prior disadvantages, they ‘compound historical injustices’ (Hellman, 2018) and thereby constitute forms of morally wrong discrimination. To make matters worse, members of these groups are then even more exposed to further algorithmic discrimination based on the additional data collected from them.
