Mature information societies are characterised by the mass production of data that provide insight into human behaviour. Analytics (as in big data analytics) has arisen as a practice to make sense of the data trails generated through interactions with networked devices, platforms and organisations. Persistent knowledge describing the behaviours and characteristics of people can be constructed over time, linking individuals into groups or classes of interest to the platform. Analytics allows a new type of algorithmically assembled group to be formed that does not necessarily align with classes or attributes already protected by privacy and anti-discrimination law, or addressed in fairness- and discrimination-aware analytics. Individuals are linked according to offline identifiers (e.g. age, ethnicity, geographical location) and shared behavioural identity tokens, allowing predictions and decisions to be taken at a group rather than an individual level. This article examines the ethical significance of such ad hoc groups in analytics and argues that the privacy interests of algorithmically assembled groups in inviolate personality must be recognised alongside individual privacy rights. Algorithmically grouped individuals have a collective interest in the creation of information about the group, and in actions taken on its behalf. Group privacy is proposed as a third interest to be balanced alongside individual privacy and the social, commercial and epistemic benefits of analytics when assessing the ethical acceptability of analytics platforms.