Abstract

Background: The growing interest in artificial intelligence (AI) has spurred an increase in the availability of large language models (LLMs) in surgical education. These LLMs hold the potential to augment medical curricula for future healthcare professionals, facilitate engagement in remote learning, and assist in personalised student feedback.

Objectives: To evaluate the ability of LLMs to provide advice to junior doctors in common ward-based surgical scenarios of increasing complexity.

Methods: Using an instrumental case study approach, this study compared the responses of ChatGPT-4, BingAI, and BARD. Each LLM was prompted with three common ward-based surgical scenarios and tasked with assisting junior doctors in clinical decision-making. A panel of two senior surgeons with extensive experience in AI and education assessed the outputs qualitatively on a Likert scale for accuracy, safety, and effectiveness, to determine their viability as a synergistic tool in surgical education. Reliability and readability were assessed quantitatively using the DISCERN score and a set of readability measures: the Flesch Reading Ease Score, the Flesch-Kincaid Grade Level, and the Coleman-Liau Index.

Results: BARD proved superior in readability, with a Flesch Reading Ease Score of 50.13 (± 5.00), a Flesch-Kincaid Grade Level of 9.33 (± 0.76), and a Coleman-Liau Index of 11.67 (± 0.58). ChatGPT-4 outperformed BARD and BingAI in reliability, with the highest DISCERN score of 71.7 (± 2.52). Using a Likert scale-based framework, the surgical expert panel further judged the advice provided by ChatGPT-4 suitable and safe for first-year interns and residents. T-tests showed statistically significant differences in reliability among all three LLMs (P < 0.05), but in readability only between ChatGPT-4 and BARD.

Conclusions: This study highlights the potential of LLMs, particularly ChatGPT-4, as a reliable and accurate educational resource for junior doctors and a complement to surgical education. The findings are limited by the simulated scenarios used, which may not generalise to real clinical practice. Future work should aim to optimise learning experiences and better support surgical trainees, with particular attention to the longitudinal impact of LLMs, refinement of AI models, validation of AI-generated content, and combinations of technologies for improved outcomes.
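For readers unfamiliar with the three readability measures cited above, the sketch below shows how they are conventionally computed from simple text counts. This is a minimal illustration, not the tooling the authors used; the syllable counter is a naive vowel-group heuristic, and the helper names (count_syllables, readability) are hypothetical.

import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, with a silent-'e' adjustment."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> dict:
    """Compute the three standard readability indices from raw counts."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(len(words), 1)
    syllables = sum(count_syllables(w) for w in words)
    letters = sum(c.isalpha() for w in words for c in w)

    wps = n_words / sentences      # words per sentence
    spw = syllables / n_words      # syllables per word
    L = 100 * letters / n_words    # letters per 100 words
    S = 100 * sentences / n_words  # sentences per 100 words

    return {
        # Flesch Reading Ease: higher scores indicate easier text
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        # Flesch-Kincaid Grade Level: approximate US school grade
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
        # Coleman-Liau Index: grade level from letter and sentence density
        "coleman_liau": 0.0588 * L - 0.296 * S - 15.8,
    }

if __name__ == "__main__":
    sample = "Keep the wound clean and dry. Review the drain output every shift."
    for name, score in readability(sample).items():
        print(f"{name}: {score:.2f}")

On this convention, a Flesch Reading Ease Score near 50 with a grade level of 9-11, as reported for BARD, corresponds roughly to text readable by a high-school student.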
