Abstract

As recently highlighted in the New England Journal of Medicine,1,2 artificial intelligence (AI) has the potential to revolutionize the field of medicine. While AI undoubtedly represents a set of extremely powerful technologies, it is not infallible. Accordingly, in their illustrative paper on potential medical applications of the recently launched large language model GPT-4, Lee et al. point out that chatbot applications of this AI-driven large language model occasionally produce false responses, noting that "A false response by GPT-4 is sometimes referred to as a 'hallucination.'"1 Indeed, it has become standard in AI to refer to a response that is not justified by the training data as a hallucination.3 We find this terminology problematic for two reasons. However, it is not constructive to merely criticize a term without offering an alternative. Therefore, given the topic and the timing, we sought advice from AI itself. Specifically, we first turned to GPT-3.5, the proverbial predecessor of GPT-4:
