Abstract

In recent years, ChatGPT has become a popular source of information online. Physicians need to be aware of the resources their patients are using to inform themselves about their conditions. This study investigates physician-graded accuracy and completeness of ChatGPT's responses to questions patients are likely to ask the artificial intelligence (AI) system about common upper limb orthopedic conditions. ChatGPT 3.5 was interrogated concerning 5 common orthopedic hand conditions: carpal tunnel syndrome, Dupuytren contracture, De Quervain tenosynovitis, trigger finger, and carpometacarpal arthritis. Questions addressed each condition's symptoms, pathology, management, surgical indications, recovery time, insurance coverage, and eligibility for workers' compensation. Each topic comprised 12 to 15 questions and was established as its own ChatGPT conversation. All questions regarding the same diagnosis were presented to the AI, and its answers were recorded. Each answer was then graded for both accuracy (Likert scale of 1-6) and completeness (Likert scale of 1-3) by 10 fellowship-trained hand surgeons. Descriptive statistics were performed. Overall, the mean accuracy score for ChatGPT's answers regarding common orthopedic hand diagnoses was 4.83 out of 6 ± 0.95. The mean completeness of answers was 2 out of 3 ± 0.59. Easily accessible online AI such as ChatGPT is becoming more advanced and thus more reliable in its ability to answer common medical questions. Physicians can anticipate such online resources being mostly accurate, though often incomplete. Patients should beware of relying on such resources in isolation.
