Abstract

Various actors employ the notion of risk when they discuss the future role of Artificial Intelligence (AI) in society, sometimes as a general pointer to possible unwanted consequences of the underlying technologies, sometimes with a view to the political regulation of AI risks. While AI risks have mostly been discussed within legal or ethical frameworks, we still lack a perspective on them grounded in sociological risk research. Building on systems-theoretical thinking about risk and society, this article analyses the potential and limits of a risk-based regulation of AI, in particular with regard to the notion of harm to fundamental rights. Drawing on the AI Act, its earlier drafts and related documents, the paper analyses how this regulatory framework delineates the harms of AI and what implications the chosen delineation has for the regulation. The results show that fundamental rights are invoked as legal rules, as values and as a foundation for the trustworthiness of AI, while at the same time being identified as at risk from AI. The attempt to frame all possible harms in terms of fundamental rights creates communicative paradoxes. It opens the door to a political classification of high-risk AI systems as well as to future standard-setting that is removed from systematic concerns about fundamental rights and values. The additional notion of systemic risk, addressing possible risks from general-purpose AI models, further reveals the problems of delineating the harms of AI. In sum, the AI Act is unlikely to achieve what it aims to do, namely the creation of conditions for trustworthy AI.
