Abstract

In AI discourse, the term hallucination describes AI-generated outputs that are unfounded and lack grounding in the input data, a phenomenon frequent enough that parts of the academic community, particularly in certain disciplines, avoid widespread collaboration with AI writing tools. While the issue of hallucinatory outputs may diminish as AI advances, the appropriateness of the terms hallucinate and hallucination in this context remains under debate. Derived from Latin, terms related to hallucination were once confined to specialized medical terminology. Over time, and across languages, the word's metaphorical meaning has evolved, entering colloquial language through the semantic innovation characteristic of youth language. We also show how this trend coincides with the use of metaphors, such as anthropomorphisms, in scientific discourse. The newest addition to hallucination's list of meanings, an established metaphor applied to a new phenomenon, is contentious. On the one hand, AI-induced and medically induced hallucinations are alike in being elusive and difficult to identify; on the other hand, the metaphor risks trivializing medical conditions that remain significantly stigmatized in contemporary societies.
