Abstract

Introduction: Artificial Intelligence tools are being introduced in almost every field of human life, including medical sciences and medical education, amid both scepticism and enthusiasm.

Research question: To assess how a generative language tool (Generative Pretrained Transformer 3.5, ChatGPT) performs at both generating questions for and answering a neurosurgical residents' written exam. Specifically, to assess how ChatGPT generates questions, how it answers human-generated questions, how residents answer AI-generated questions, and how ChatGPT answers its own self-generated questions.

Materials and methods: The written exam comprised 50 questions, both open-ended and multiple-choice: 46 were generated by humans (senior staff members) and 4 were generated by ChatGPT. Eleven participants took the exam (ChatGPT and 10 residents). Eight questions were not submitted to ChatGPT because they contained images or schematic drawings to interpret.

Results: Formulating requests to ChatGPT required an iterative process to make both questions and answers precise. ChatGPT ranked among the lowest of all participants (9th of 11). There was no difference in residents' response rates between human-generated and AI-generated questions that could be attributed to lesser clarity of the questions. ChatGPT answered all of its self-generated questions correctly.

Discussion and conclusions: AI is a promising and powerful tool for medical education and for specific medical purposes, which remain to be further determined. To have AI generate logical and sound questions, the request must be formulated as precisely as possible, framing the content, the type of question, and its correct answers.
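To illustrate the conclusion about precise framing, the sketch below shows how such a question-generation request might be issued to GPT-3.5 programmatically. This is a minimal illustration, assuming the OpenAI Python SDK and the gpt-3.5-turbo endpoint; the prompt wording, the clinical topic, and the parameter choices are hypothetical assumptions, not the study's actual protocol.

    # Minimal sketch (not the authors' protocol): framing a precise
    # question-generation request, specifying content, question type,
    # and the expected correct answer, as the conclusions recommend.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical prompt: topic and format are illustrative only.
    prompt = (
        "You are writing a neurosurgery residency written exam. "
        "Generate one multiple-choice question on the management of "
        "chronic subdural hematoma. Provide exactly four options "
        "labelled A-D, state which option is correct, and give a "
        "one-sentence justification for the correct answer."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # lower temperature favours well-formed, consistent items
    )

    print(response.choices[0].message.content)

Constraining the content area, the question format, and the required answer key in a single request mirrors the iterative refinement the authors describe: the less the model is left to infer, the fewer correction rounds are needed.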
