Abstract
The purpose of this study was to assess the responses of ChatGPT-4, a pioneering artificial intelligence-based chatbot, to frequently asked questions about two common pediatric ophthalmologic disorders, amblyopia and childhood myopia. Twenty-seven questions about amblyopia and 28 questions about childhood myopia were each asked of ChatGPT twice, for a total of 110 questions. The responses were graded by two pediatric ophthalmologists as acceptable, incomplete, or unacceptable, with high agreement (96.4%) between the two graders. ChatGPT provided acceptable responses to 93 of 110 (84.6%) questions overall (44 of 54 [81.5%] questions on amblyopia and 49 of 56 [87.5%] questions on childhood myopia). Seven of 54 (12.9%) responses to questions on amblyopia were graded as incomplete, compared with 4 of 56 (7.1%) responses to questions on childhood myopia. ChatGPT gave unacceptable responses to three questions each on amblyopia (5.6%) and childhood myopia (5.4%). The most notable unacceptable responses concerned the definition of reverse amblyopia and the refractive error threshold for prescribing spectacles to children with myopia. By acceptably answering 84.6% of the most frequently asked questions about amblyopia and childhood myopia, ChatGPT shows potential to serve as an adjunct informational tool for pediatric ophthalmology patients and their caregivers. [J Pediatr Ophthalmol Strabismus. 2024;61(2):86-89.]