Abstract

Introduction
The use of artificial intelligence technology is progressively expanding and advancing in health care and the biomedical literature. Since its launch, ChatGPT has rapidly gained popularity and become one of the fastest-growing artificial intelligence applications in history. This study evaluated the accuracy and comprehensiveness of ChatGPT-generated responses to medical queries in clinical neurology.

Methods
We directed 216 questions from different clinical neurology subspecialties to ChatGPT. The questions were classified into three categories: multiple-choice, descriptive, and binary (yes/no). Each question was subjectively rated as easy, medium, or hard in difficulty. Questions that also tested intuitive clinical thinking and reasoning ability were evaluated as a separate category.

Results
ChatGPT answered 141 questions correctly (65.3%). No significant difference was detected in accuracy and comprehensiveness scale scores or correct answer rates when questions were compared by style or difficulty level. However, a comparative analysis of question characteristics revealed significantly lower accuracy and comprehensiveness scale scores and correct answer rates for interpretation-based questions that required critical thinking (p = 0.007, 0.007, and 0.001, respectively).

Conclusion
ChatGPT showed moderate overall performance in clinical neurology and performed inadequately on questions that required interpretation and critical thinking. It also displayed limited performance in specific subspecialties. It is essential to acknowledge the limitations of artificial intelligence and to diligently verify medical information produced by such models against reliable sources.
