Abstract

Objectives: To examine the quality, reliability, readability, and usefulness of ChatGPT in promoting early detection of oral cancer.

Study design: 108 patient-oriented questions about oral cancer early detection were compiled from an expert panel, professional societies, and web-based tools. Questions were categorized into four topic domains, and ChatGPT 3.5 was asked each question independently. ChatGPT answers were evaluated for quality, reliability, readability, actionability, and usefulness. Two experienced reviewers independently assessed each response.

Results: Questions related to clinical appearance constituted 36.1% (n = 39) of the total questions. ChatGPT provided "very useful" responses to the majority of questions (75%; n = 81). The mean Global Quality Score was 4.24 ± 1.3 out of 5. The mean reliability score was 23.17 ± 9.87 out of 25. The mean understandability score was 76.6% ± 25.9 out of 100, while the mean actionability score was 47.3% ± 18.9 out of 100. The mean FKS reading ease score was 38.4% ± 29.9, while the mean SMOG index readability score was 11.65 ± 8.4. No misleading information was identified among the ChatGPT responses.

Conclusion: ChatGPT is an attractive and potentially useful resource for informing patients about early detection of oral cancer. Nevertheless, concerns exist about the readability and actionability of the information it offers.
