Abstract

This study aimed to assess the performance of "Bard," one of ChatGPT's competitors, in answering practice questions for the ophthalmology board certification exam. In December 2023, 250 multiple-choice questions from the "BoardVitals" ophthalmology exam question bank were randomly selected and entered into Bard to assess the artificial intelligence chatbot's ability to comprehend, process, and answer complex scientific and clinical ophthalmic questions. A random mix of text-only and image-and-text questions was selected from 10 subsections, with 25 questions per subsection. The percentage of correct responses was calculated for each section, and an overall assessment score was determined. Overall, Bard answered 62.4% (156/250) of questions correctly. Performance was lowest on "Retina and Vitreous" at 24% (6/25) and highest on "Oculoplastics" at 84% (21/25). While most questions were entered with minimal difficulty, Bard could not process all of them; this was particularly an issue for questions that included human images or multiple image files. Some vignette-style questions were also not understood by Bard and were therefore omitted. Future investigations will include more questions per subsection to increase the number of available data points. While Bard answered the majority of questions correctly and is capable of analyzing vast amounts of medical data, it ultimately lacks the holistic understanding and experience-informed knowledge of an ophthalmologist. An ophthalmologist's ability to synthesize diverse pieces of information and draw on clinical experience to answer complex standardized board questions is, at present, irreplaceable; artificial intelligence, in its current form, can nonetheless be employed as a valuable tool for supplementing clinicians' study methods.
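
A minimal sketch of how the per-section percentages and the overall assessment score could be tallied is shown below. The function names and the data structure are illustrative assumptions, not the authors' method; only the "Retina and Vitreous" (6/25) and "Oculoplastics" (21/25) counts are reported in the abstract, so the example dictionary lists just those two sections.

# Illustrative scoring sketch (Python); section data beyond the two
# reported in the abstract would need to be filled in from the study.

def score_sections(results: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return the percentage of correct answers for each section."""
    return {name: 100 * correct / total
            for name, (correct, total) in results.items()}

def overall_score(results: dict[str, tuple[int, int]]) -> float:
    """Return the overall percentage of correct answers across all sections."""
    correct = sum(c for c, _ in results.values())
    total = sum(t for _, t in results.values())
    return 100 * correct / total

if __name__ == "__main__":
    results = {
        "Retina and Vitreous": (6, 25),   # reported: 24%
        "Oculoplastics": (21, 25),        # reported: 84%
    }
    for name, pct in score_sections(results).items():
        print(f"{name}: {pct:.1f}%")
    print(f"Overall: {overall_score(results):.1f}%")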
