Abstract

The objective was to evaluate the performance of ChatGPT on a French medical school entrance examination. This cross-sectional study used a consecutive sample of text-based multiple-choice practice questions for the Parcours d'Accès Spécifique Santé. ChatGPT answered the questions in French. We compared the performance of ChatGPT in obstetrics and gynecology (OBGYN) with its performance on the whole test. Overall, 885 questions were evaluated. The mean test score was 34.0% (306 out of a maximal score of 900). ChatGPT's overall performance was 33.0% (292 of 885 questions answered correctly). ChatGPT performed worse in biostatistics (13.3% ± 19.7%) than in anatomy (34.2% ± 17.9%; P = 0.037) and in histology and embryology (40.0% ± 18.5%; P = 0.004). The OBGYN section comprised 290 questions. Neither the test scores nor the performance of ChatGPT differed between the OBGYN section and the whole entrance test (P = 0.76 and P = 0.10, respectively). ChatGPT answered one-third of the questions correctly on this French entrance-test preparation material, and its performance in OBGYN was similar to its overall performance.
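
For readers who want to check the arithmetic behind the reported percentages, the minimal sketch below recomputes the overall accuracy figures and illustrates one way an OBGYN-versus-overall comparison could be run as a chi-square test on two proportions. The abstract does not state which statistical test the authors used, and the number of OBGYN questions ChatGPT answered correctly is not reported, so the `obgyn_correct` value is a hypothetical placeholder chosen only to match the "similar performance" finding.

```python
# Sketch of the accuracy arithmetic reported in the abstract, plus an
# illustrative two-proportion comparison. The actual test used by the
# authors is not stated; this assumes a chi-square test of independence.
from scipy.stats import chi2_contingency

# Figures taken from the abstract
total_questions = 885
total_correct = 292            # ChatGPT, whole entrance test
obgyn_questions = 290
obgyn_correct = 96             # HYPOTHETICAL: not reported in the abstract

print(f"Overall accuracy: {total_correct / total_questions:.1%}")  # ~33.0%
print(f"Mean test score:  {306 / 900:.1%}")                        # 34.0%

# Illustrative chi-square comparison of OBGYN vs. overall accuracy
table = [
    [obgyn_correct, obgyn_questions - obgyn_correct],
    [total_correct, total_questions - total_correct],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.2f}")
```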
