Abstract

Chat Generative Pre-trained Transformer (ChatGPT) is an artificial intelligence-powered language model chatbot that may assist otolaryngologists in practice and research. The ability of ChatGPT to generate patient-centered information on laryngopharyngeal reflux disease (LPRD) was evaluated. Twenty-five questions covering the definition, clinical presentation, diagnosis, and treatment of LPRD were developed from the Dubai definition and management of LPRD consensus and recent reviews. Questions from these four categories were entered into ChatGPT-4. Four board-certified laryngologists rated the accuracy of the ChatGPT-4 answers on a 5-point Likert scale, and interrater reliability was evaluated. The mean scores (SD) of ChatGPT-4 answers for definition, clinical presentation, additional examination, and treatments were 4.13 (0.52), 4.50 (0.72), 3.75 (0.61), and 4.18 (0.47), respectively. Interrater reliability was high for the sub-scores (ICC = 0.973). ChatGPT-4 performed worst on answers about the most prevalent LPR signs, the most reliable objective diagnostic tool (hypopharyngeal-esophageal multichannel intraluminal impedance-pH monitoring, HEMII-pH), and the criteria for diagnosing LPR with HEMII-pH. ChatGPT-4 may provide adequate information on the definition of LPR, its differences from gastroesophageal reflux disease (GERD), and its clinical presentation. Information provided on extra-laryngeal manifestations and HEMII-pH may need further optimization. Given recent trends showing increasing patient use of internet sources for self-education, the findings of the present study may help draw attention to the accuracy of ChatGPT-4 on the topic of LPR.
