Abstract

Data analytics and data-driven approaches in Machine Learning are now among the most hailed computing technologies in many industrial domains. One major application is predictive analytics, which is used to predict sensitive attributes, future behavior, or cost, risk and utility functions associated with target groups or individuals based on large sets of behavioral and usage data. This paper stresses the severe ethical and data protection implications of predictive analytics if it is used to predict sensitive information about single individuals or to treat individuals differently based on data that many unrelated individuals have provided. To address these concerns within applied ethics, the paper first introduces the concept of “predictive privacy” to formulate an ethical principle protecting individuals and groups against differential treatment based on Machine Learning and Big Data analytics. Second, it analyses the typical data processing cycle of predictive systems to provide a step-by-step discussion of ethical implications, locating occurrences of predictive privacy violations. Third, the paper sheds light on what is qualitatively new in the way predictive analytics challenges ethical principles such as human dignity and the (liberal) notion of individual privacy. These new challenges arise when predictive systems transform statistical inferences, which provide knowledge about the cohort of training data donors, into individual predictions, thereby crossing what I call the “prediction gap”. Finally, the paper concludes that data protection in the age of predictive analytics is a collective matter, as we face situations where an individual’s (or group’s) privacy is violated using data that other individuals provide about themselves, possibly even anonymously.

Highlights

  • Data analytics and data-driven approaches in Machine Learning (ML) are among the most hailed computing technologies in many industrial domains

  • One major application is the algorithmic prediction of human behavior, or human “fate”, if you will: Predictive Analytics (PA) leverages large behavioral data sets to classify individuals according to future risks, economic developments or expected costs and utility, as predicted from data correlations (O’Neil, 2016; Wachter & Mittelstadt, 2018; Mühlhoff, 2020a)

  • In the Conclusion, I will relate the concept of predictive privacy to the fundamental ethical principle of human dignity, arguing that there might be good reasons to abandon the use of PA entirely, as technological means of making it ethically viable are rather limited

Summary

Introduction

Data analytics and data-driven approaches in Machine Learning (ML) are among the most hailed computing technologies in many industrial domains. The paper dissects the various steps from data input to data output of predictive systems in order to discuss a spectrum of ethical concerns connected to each point in the data processing cycle. This step-by-step procedure is meant both as an academic contribution to a more nuanced understanding of the ethical challenges of PA and as a guide towards the operationalization of ethical thought in responsible implementations and political regulations of PA. In the Conclusion, I will relate the concept of predictive privacy to the fundamental ethical principle of human dignity, arguing that there might be good reasons to abandon the use of PA entirely, as technological means of making it ethically viable are rather limited.
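The basic mechanism behind the “prediction gap” can be made concrete with a minimal, purely illustrative sketch that is not taken from the paper: a model is trained on behavioral data from “training data donors” who also disclosed a sensitive attribute, and the learned statistical link is then applied to an unrelated individual who shared only behavioral data. All names, features, and labels below are hypothetical placeholders chosen for the sketch.

```python
# Illustrative toy example (not from the paper): a classifier trained on data
# from "training data donors" who disclosed a sensitive attribute is used to
# predict that attribute for a third party who never disclosed it --
# crossing what the paper calls the "prediction gap".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical behavioral features (e.g., usage patterns) for 1,000 donors.
n_donors, n_features = 1000, 5
X_donors = rng.normal(size=(n_donors, n_features))

# Sensitive attribute the donors disclosed (e.g., a health-related label),
# synthetically correlated with the behavioral features for this sketch.
weights = rng.normal(size=n_features)
y_donors = (X_donors @ weights + rng.normal(size=n_donors) > 0).astype(int)

# The predictive system learns the statistical link between behavior
# and the sensitive attribute across the cohort of donors.
model = LogisticRegression().fit(X_donors, y_donors)

# A new individual shares only behavioral data, never the attribute itself ...
x_new = rng.normal(size=(1, n_features))

# ... yet the system outputs an individual-level prediction of it anyway.
print("Predicted sensitive attribute:", model.predict(x_new)[0])
print("Estimated probability:", round(model.predict_proba(x_new)[0, 1], 2))
```

Note that, as the abstract emphasizes, the donors’ data could even be anonymized without changing this outcome: the inference targets a third party, so the privacy harm does not depend on identifying the donors.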

Predictive systems and predictive privacy
How prediction is a challenge to privacy and data protection
Predictive privacy
Relation to other privacy conceptions
Ethical discussion step by step
Two types of unfair bias
Extending the minimal model
Findings
Conclusion and outlook