Abstract
This paper critically examines the political implications of Large Language Models (LLMs), focusing on the individual and collective ability to engage in political practices. The advent of AI-based chatbots powered by LLMs has sparked debates on their democratic implications. These debates typically focus on how LLMs spread misinformation and thus hinder the evaluative skills essential for informed decision-making and deliberation. This paper suggests that beyond the spread of misinformation, the political significance of LLMs extends to the core of political subjectivity and action. It explores how LLMs contribute to political de-skilling by influencing the capacities for critical engagement and collective action. Put differently, we explore how LLMs shape political subjectivity. We draw on Arendt’s distinction between speech and language and Foucault’s work on counter-conduct to articulate in what sense LLMs give rise to political de-skilling and hence pose a threat to political subjectivity. The paper concludes by considering how to account for the impact of LLMs on political agency without succumbing to technological determinism, and by pointing to how the practice of parrhesia enables one to form one’s political subjectivity in relation to LLMs.