Abstract

Background: Timely access to human expertise for affordable and efficient triage of ophthalmic conditions is inconsistent. With recent advancements in publicly available artificial intelligence (AI) chatbots, the lay public may turn to these tools for triage of ophthalmic complaints. Validation studies are necessary to evaluate the performance of AI chatbots as triage tools and to inform the public regarding their safety.

Objective: To evaluate the triage performance of AI chatbots for ophthalmic conditions.

Design: Cross-sectional study.

Setting: Single centre.

Participants: Ophthalmology trainees, OpenAI ChatGPT (GPT-4), Bing Chat, and the WebMD Symptom Checker.

Methods: Forty-four clinical vignettes representing common ophthalmic complaints were developed, and a standardized pathway of prompts was presented to each tool in March 2023. Primary outcomes were the proportion of responses with the correct diagnosis listed among the top 3 possible diagnoses and the proportion with the correct triage urgency. Ancillary outcomes included the presence of grossly inaccurate statements, mean reading grade level, mean response word count, proportion of responses with attribution, and the most commonly cited sources.

Results: The ophthalmology trainees, ChatGPT, Bing Chat, and the WebMD Symptom Checker listed the appropriate diagnosis among the top 3 suggestions in 42 (95%), 41 (93%), 34 (77%), and 8 (33%) cases, respectively. Triage urgency was appropriate in 38 (86%), 43 (98%), and 37 (84%) cases for the ophthalmology trainees, ChatGPT, and Bing Chat, respectively.

Conclusions: ChatGPT using the GPT-4 model offered high diagnostic and triage accuracy comparable with that of ophthalmology trainees, with no grossly inaccurate statements. Bing Chat had lower accuracy and a tendency to overestimate triage urgency.
