Abstract

Techniques of artificial intelligence (AI) are increasingly used in the treatment of patients, such as providing a diagnosis in radiological imaging, improving workflow by triaging patients, or providing an expert opinion based on clinical symptoms; however, such AI techniques also hold intrinsic risks, as AI algorithms may point in the wrong direction and constitute a black box without explaining the reason for the decision-making process. This article outlines a case where an erroneous ChatGPT diagnosis, relied upon by the patient to evaluate symptoms, led to a significant treatment delay and a potentially life-threatening situation. With this case, we would like to point out the typical risks posed by the widespread application of AI tools not intended for medical decision-making.
