Abstract

Objectives: To examine the quality, reliability, readability, and usefulness of ChatGPT in promoting early detection of oral cancer.

Study design: 108 patient-oriented questions about oral cancer early detection were compiled from an expert panel, professional societies, and web-based tools. Questions were categorized into four topic domains, and ChatGPT 3.5 was asked each question independently. ChatGPT's answers were evaluated for quality, readability, actionability, and usefulness. Two experienced reviewers independently assessed each response.

Results: Questions related to clinical appearance constituted 36.1% (n = 39) of the total. ChatGPT provided "very useful" responses to the majority of questions (75%; n = 81). The mean Global Quality Score was 4.24 ± 1.3 out of 5, and the mean reliability score was 23.17 ± 9.87 out of 25. The mean understandability score was 76.6 ± 25.9 out of 100, while the mean actionability score was 47.3 ± 18.9 out of 100. The mean FKS reading ease score was 38.4 ± 29.9, and the mean SMOG index readability score was 11.65 ± 8.4. No misleading information was identified in ChatGPT's responses.

Conclusion: ChatGPT is an attractive and potentially useful resource for informing patients about early detection of oral cancer. Nevertheless, concerns remain about the readability and actionability of the information it offers.
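For readers unfamiliar with the two readability metrics reported above, the sketch below shows how the Flesch Reading Ease score and the SMOG index are conventionally computed. The paper does not specify its tooling, so the function names and the sample counts here are illustrative assumptions, not the authors' pipeline.

```python
import math

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    # Flesch Reading Ease: 0-100 scale, higher scores mean easier text.
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def smog_index(polysyllables: int, sentences: int) -> float:
    # SMOG grade: estimated U.S. school grade needed to understand the text.
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

# Illustrative counts for a single hypothetical ChatGPT response:
# 120 words, 8 sentences, 190 syllables, 22 words of 3+ syllables.
print(round(flesch_reading_ease(120, 8, 190), 1))  # ~57.7 ("fairly difficult")
print(round(smog_index(22, 8), 1))                 # ~12.6 (college-entry level)
```

On these scales, the reported means (reading ease 38.4; SMOG 11.65) correspond to difficult text requiring roughly a 12th-grade reading level, which is the basis for the readability concern raised in the Conclusion.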

