Abstract

Automated Influence is the use of Artificial Intelligence (AI) to collect, integrate, and analyse people's data in order to deliver targeted interventions that shape their behaviour. We consider three central objections against Automated Influence, focusing on privacy, exploitation, and manipulation, showing in each case how a structural version of that objection has more purchase than its interactional counterpart. By rejecting the interactional focus of "AI Ethics" in favour of a more structural, political philosophy of AI, we show that the real problem with Automated Influence is the crisis of legitimacy that it precipitates.

Highlights

  • After decades of slumber, the world is awakening to the extraordinary power that we have vested in the custodians of our digital infrastructure.

  • We introduce and offer a moral diagnosis of one of the primary engines of our contemporary digital infrastructure: Automated Influence, the use of automated systems to collect and analyse user data, and to target interventions aimed at changing users' behaviour.

  • At the same time as collectively generating this new cooperative surplus of resources, knowledge, and power, the systems of Automated Influence and the companies purveying it have worked to atomise individual consumers, reinforcing in us the mindset of individual choice and consent, and fragmenting our shared epistemic landscape (Viljoen, Goldenfein, and McGuigan 2020, 7). This is one of the great ironies of Automated Influence: it depends on an infrastructure that derives from a species of unthinking collective action, yet it enables a kind of personalisation, and an ideology of individualism, that fragments us such that we become worse at the considered collective action needed to bargain collectively with the tech companies.

Introduction

The world is awakening to the extraordinary power that we have vested in the custodians of our digital infrastructure. The prevailing critique of Automated Influence, especially in public discourse and in academic research, emphasises its interactional shortcomings: the wrongs done to particular individuals in particular interactions. This lends the critique normative clarity and motivational force (you should oppose Automated Influence, because it is undermining your self-authenticating claims), but we think an exclusively interactional approach misses crucially important structural dimensions of the problem with Automated Influence. The structural approach instead evaluates the emergent social structures of which those interactions are the leading edge. It considers how those social structures directly and indirectly impact people's lives, as well as their relational properties, such as how they influence distributions of power, knowledge, and resources, and their aggregate effects. This presents us with a more demanding challenge. We recommend that the emerging field of 'AI Ethics' turn away from its present interactional focus and towards a more structural agenda: a genuinely political philosophy of data and AI.

Automated Influence
Privacy
Findings
Conclusion