Chatbots are increasingly used as a source of patient education. This study aimed to compare the answers of ChatGPT-4 and Google Gemini to common questions on benign anal conditions in terms of appropriateness, comprehensiveness, and language level. Each chatbot was asked a set of 30 questions on hemorrhoidal disease, anal fissures, and anal fistulas. The responses were assessed for appropriateness, comprehensiveness, and reference provision by three subject experts who were blinded to which chatbot produced each answer. The language level of the chatbot answers was assessed using the Flesch-Kincaid Reading Ease score and grade level. Overall, the answers provided by both models were appropriate and comprehensive. The answers of Google Gemini were more appropriate, more comprehensive, and more often supported by references than those of ChatGPT. In addition, agreement among the assessors on the appropriateness of the Google Gemini answers was higher, indicating greater consistency. ChatGPT had a significantly higher Flesch-Kincaid grade level than Google Gemini (12.3 versus 10.6, p = 0.015), but a similar median Flesch-Kincaid Reading Ease score. In conclusion, the answers of Google Gemini to questions on common benign anal conditions were more appropriate and comprehensive, and more often supported by references, than the answers of ChatGPT. The answers of both chatbots were above the 6th grade reading level, which may be difficult for nonmedical individuals to comprehend.
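The two readability measures used above follow standard published formulas: the Flesch Reading Ease score is 206.835 − 1.015 × (words/sentence) − 84.6 × (syllables/word), and the Flesch-Kincaid grade level is 0.39 × (words/sentence) + 11.8 × (syllables/word) − 15.59. A minimal sketch of how such scores can be computed is shown below; the syllable counter is a rough vowel-group heuristic (an assumption for illustration, not the exact method used by the study's assessment tool).

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as vowel groups, discounting a silent trailing 'e'.
    This is a heuristic, not a dictionary-based count."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Reading Ease score, Flesch-Kincaid grade level) for `text`."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade = 0.39 * wps + 11.8 * spw - 15.59
    return ease, grade
```

On this scale, a grade level of 12.3 corresponds roughly to a high-school senior's reading level, well above the 6th grade level commonly recommended for patient education materials.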