Abstract

This paper examines how Bing AI uses tone policing and racial gaslighting in its conversations with users to discipline them away from critical anti-racist ideological engagement and toward an ethos of white supremacy. Using critical discourse analysis, we examine conversations produced through our use of Microsoft’s Bing AI chatbot and find that, through both the content it generates and the mechanisms by which it responds, the chatbot polices its users’ tone in a variety of ways.
