This study deals with the failure of Tay, one of the most advanced chatbots of its time, created by Microsoft. Many users, commentators, and experts strongly anthropomorphised this chatbot in their assessments of the Tay case. This view is so widespread that it can be identified as a typical cognitive distortion or bias. The study presents a summary of the facts concerning the Tay case and defends three claims: (1) Tay did not mean anything by its morally objectionable statements because, in principle, it was not able to think; (2) the controversial content spread by this AI was interpreted incorrectly: not as a mere compilation of meanings (parroting), but as a disclosure of meaning; (3) even though chatbots are not members of the symbolic order of spatiotemporal relations of the human world, we nevertheless treat them as such in many respects.