Abstract

Automated decision-making (ADM) is already used across a variety of societal contexts,1 from simple models that help online service providers carry out operations on behalf of their users, for instance for billing purposes or to create a better-functioning social network,2 to more complex profiling algorithms that filter content for targeted advertisements, credit scoring, recommender systems, IoT applications, insurance proposals, health-care applications, or admission examinations for education or training.3 Meanwhile, the dynamics of ADM may collide with and exert pressure on fundamental rights.4 For instance, Kramer and others described how Facebook experimented with the algorithm that organizes users' news feeds, testing how different fine-tunings of the algorithm may affect users' behaviour and emotions.5 Such use of ADM, in which users are selected on the basis of personal data and targeted with emotionally charged messages, may impact one's privacy or freedom of thought and conscience.6 ADM-driven searches in Google's search engine for African-American names were more likely to show advertisements suggesting that the person had an arrest record.7 Such use of personal data in ADM can lead to discriminatory applications. On a group or societal scale, such applications of ADM may result in social stratification.8
