Abstract

INTRODUCTION: Artificial intelligence (AI) and machine learning (ML) have transformed healthcare, with applications across specialized medical fields. Neurosurgery can benefit from AI in surgical planning, prediction of patient outcomes, and analysis of neuroimaging data. GPT-4, OpenAI's updated advanced language model with additional training parameters, has exhibited exceptional performance on standardized exams.

METHODS: GPT-4's performance was examined on 643 Congress of Neurological Surgeons (CNS) Self-Assessment Neurosurgery Exam (SANS) board-style questions drawn from various neurosurgery subspecialties. Of those, 477 were text-based and 166 contained images. GPT-4 refused to answer 52 questions that contained no text. The remaining 591 questions were entered into GPT-4, and its performance was evaluated based on first-time responses. Raw scores were analyzed across subspecialties and question types, then compared with previously reported ChatGPT performance and with the scores of SANS users, medical students, and neurosurgery residents.

RESULTS: GPT-4 attempted 91.9% of CNS SANS questions and achieved 76.6% accuracy. Accuracy increased to 79.0% on text-only questions. GPT-4 outperformed ChatGPT across all neurosurgery categories, scoring highest in Pain/Peripheral Nerve (84%) and lowest in Spine (73%). The model exceeded the performance of medical students (26.3%), neurosurgery residents (61.5%), and the national average of SANS users (69.3%) across all categories.

CONCLUSIONS: GPT-4 outperformed medical students, neurosurgery residents, and the national average of SANS users. The model's notable accuracy suggests potential applications in educational settings and clinical decision-making, enhancing provider efficiency and improving patient care.
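To illustrate the first-attempt evaluation protocol described in METHODS, the sketch below shows one plausible way to score board-style questions with GPT-4 and tally accuracy per subspecialty. It is a minimal sketch only, assuming the OpenAI chat-completions API; the abstract does not specify the tooling used, and the question records, field names, and system prompt here are hypothetical.

```python
# Hypothetical sketch of a first-attempt scoring loop; the study's actual
# pipeline, prompts, and data format are not described in the abstract.
from collections import defaultdict

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical question records: stem with lettered options, the keyed
# answer, and the SANS subspecialty category.
questions = [
    {
        "category": "Spine",
        "prompt": "A 54-year-old presents with ... What is the next best step?\n"
                  "A) ... B) ... C) ... D) ...",
        "answer": "C",
    },
    # ... remaining text-based questions
]

correct = defaultdict(int)
attempted = defaultdict(int)

for q in questions:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer with the single best option letter (A-D)."},
            {"role": "user", "content": q["prompt"]},
        ],
    )
    # Only the first response counts, mirroring the first-time-response protocol.
    choice = response.choices[0].message.content.strip()[:1].upper()
    attempted[q["category"]] += 1
    if choice == q["answer"]:
        correct[q["category"]] += 1

# Raw per-subspecialty accuracy, analogous to the category scores reported.
for category in attempted:
    print(f"{category}: {correct[category] / attempted[category]:.1%}")
```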
