Abstract
This paper has three main goals: (1) to clarify the role of artificial intelligence (AI)—along with algorithms more broadly—in online radicalization that results in "real world violence"; (2) to argue that technological solutions (like better AI) are inadequate proposals for this problem, for both technical and social reasons; and (3) to demonstrate that platform companies' (e.g., Meta's, Google's) statements of preference for technological solutions function as a type of propaganda that serves to erase the work of the thousands of human content moderators and to conceal the harms they experience. I argue that the proper assessment of these important, related issues must be free of the obfuscation that the "better AI" proposal generates. For this reason, I describe the AI-centric solutions favoured by major platform companies as a type of obfuscating and dehumanizing propaganda.