Abstract

Purpose: To examine and compare ChatGPT and Google websites in answering common head and neck cancer questions.

Materials and methods: Commonly asked questions about head and neck cancer were compiled and entered into both ChatGPT-4 and the Google search engine. For each question, the ChatGPT response and the first website search result were collected and examined. Content quality was assessed by independent reviewers using standardized grading criteria and the modified Ensuring Quality Information for Patients (EQIP) tool. Readability was determined using the Flesch reading ease scale.

Results: In total, 49 questions related to head and neck cancer were included. Google sources were, on average, of significantly higher quality than ChatGPT responses (4.2 vs 3.6, p = 0.005). According to the EQIP tool, Google and ChatGPT had similar average response rates per criterion (24.4 vs 20.5, p = 0.09), while Google had a significantly higher average score per question than ChatGPT (13.8 vs 11.7, p < 0.001). According to the Flesch reading ease scale, ChatGPT and Google sources were similarly difficult to read (33.1 vs 37.0, p = 0.180) and written at a college reading level (14.3 vs 14.2, p = 0.820).

Conclusion: ChatGPT responses were as challenging to read as Google sources but of poorer quality, reflecting lower reliability and accuracy in answering questions. Though promising, ChatGPT in its current form should not be considered dependable, and Google sources remain the preferred resource for patient educational materials.
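For reference, the readability scores reported above follow the standard Flesch formulas; the conventional definitions are sketched here (the paper's exact scoring implementation is described in the full text and may differ in detail):

\[
\text{Flesch Reading Ease} = 206.835 - 1.015\left(\frac{\text{total words}}{\text{total sentences}}\right) - 84.6\left(\frac{\text{total syllables}}{\text{total words}}\right)
\]

\[
\text{Flesch-Kincaid Grade Level} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59
\]

Under these conventions, reading ease scores in the 30s correspond to "difficult" text, consistent with the reported grade levels of roughly 14 (college level).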
