ChatGPT, developed by OpenAI, is an AI model designed to generate human-like responses from training on diverse datasets. Our study evaluated ChatGPT-3.5’s capability to generate pharmacology multiple-choice questions adhering to the NBME guidelines for USMLE Step exams. ChatGPT has seen rapid adoption and shows potential in healthcare education and practice; however, concerns about its accuracy and depth of understanding prompted this evaluation. Using a structured prompt engineering process, ChatGPT was tasked with generating questions across various organ systems, which were then reviewed by two pharmacology experts. ChatGPT consistently met the NBME criteria, achieving an average score of 13.7 out of 16 (85.6%) from expert 1 and 14.5 out of 16 (90.6%) from expert 2, for a combined average of 14.1 out of 16 (88.1%) (kappa coefficient = 0.76). Despite these high scores, the experts noted shortcomings in medical accuracy and depth: the model often produced “pseudo vignettes” rather than in-depth clinical questions. ChatGPT-3.5 shows potential for generating NBME-style questions, but improvements in medical accuracy and understanding are crucial for its reliable use in medical education. This study underscores the need for AI models tailored to the medical domain to enhance educational tools for medical students.