Abstract

People rely on data-driven AI technologies nearly every time they go online, whether they are shopping, scrolling through news feeds, or looking for entertainment. Yet despite their ubiquity, personalization algorithms and the associated large-scale collection of personal data have largely escaped public scrutiny. Policy makers who wish to introduce regulations that respect people’s attitudes towards privacy and algorithmic personalization on the Internet would greatly benefit from knowing how people perceive personalization and personal data collection. To contribute to an empirical foundation for this knowledge, we surveyed public attitudes towards key aspects of algorithmic personalization and people’s data privacy concerns and behavior using representative online samples in Germany (N = 1065), Great Britain (N = 1092), and the United States (N = 1059). Our findings show that people object to the collection and use of sensitive personal information and to the personalization of political campaigning and, in Germany and Great Britain, to the personalization of news sources. Encouragingly, attitudes are independent of political preferences: People across the political spectrum share the same concerns about their data privacy and show similar levels of acceptance of personalized digital services and the use of private data for personalization. We also found an acceptability gap: People are more accepting of personalized services than of the collection of the personal data and information those services require. This gap is evident at both the aggregate and the individual level; across countries, between 64% and 75% of respondents rated personalized services as, on average, more acceptable than the collection of personal information or data.
Our findings suggest a need for transparent algorithmic personalization that minimizes use of personal data, respects people’s preferences on personalization, is easy to adjust, and does not extend to political advertising.

Highlights

  • The online experience of billions of people is shaped by machine-learning algorithms and other types of artificial intelligence (AI) technologies

  • Respondents were partially familiar with AI-related concepts and key entities: They knew that algorithms are employed online, and that algorithms are used to curate social media feeds

  • People accept personalized commercial services, but they object to the use of the personal data and sensitive information that is currently collected for personalization


Introduction

The online experience of billions of people is shaped by machine-learning algorithms and other types of artificial intelligence (AI) technologies. These self-learning programs include a variety of algorithmic tools that harvest and process people’s personal data in order to customize and mediate information online, in, for example, personalized social media feeds, targeted advertising, recommender systems, and algorithmic filtering in search engines (for more examples see Table B1 in Appendix B). There is substantial concern that personalized political messages containing false claims influenced both the Brexit referendum and the U.S. presidential election in 2016 (Digital, Culture, Media and Sport Committee, 2019; Persily, 2017). There have also been growing concerns that the combination of algorithmic filtering and opinion dynamics on social media networks has fostered the spread of false information about the COVID-19 pandemic and governments’ responses to it, thereby reinforcing dangerous beliefs and conspiracy narratives (Cinelli et al., 2020; Thompson and Warzel, 2021; Zarocostas, 2020) and potentially hampering an efficient public response.

