Abstract

Objective
This paper evaluates ChatGPT's accuracy and consistency in providing information on ankyloglossia, a congenital oral condition. By assessing alignment with expert consensus, the study explores potential implications for patients relying on AI for medical information.

Methods
Statements from the 2020 clinical consensus statement (CCS) on ankyloglossia were presented to ChatGPT, and its responses were scored on a 9-point Likert scale. The mean and standard deviation of ChatGPT scores were computed for each statement. Statistical analysis was conducted in Excel.

Results
Among the 63 statements assessed, 67% of ChatGPT responses closely aligned with the expert consensus mean scores. However, for 17% of statements (11/63), the ChatGPT mean response differed from the CCS mean by 2.0 or greater, raising concerns about ChatGPT's potential to disseminate uncertain or debated medical information. Variations in mean scores highlighted discrepancies, with some statements deviating substantially from expert opinion.

Conclusion
While ChatGPT largely mirrored medical viewpoints on ankyloglossia, its alignment with non-consensus statements warrants caution in relying on it for medical advice. Future research should refine AI models, address inaccuracies, and explore diverse user queries to support safe integration into medical decision-making. Despite potential benefits, ongoing examination of ChatGPT's capabilities and limitations is crucial, given its impact on health equity and information access.
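To make the Methods concrete, the following is a minimal sketch, using entirely hypothetical scores, of the comparison described above: the mean and standard deviation of ChatGPT's Likert scores for a statement are computed, and the statement is flagged when the ChatGPT mean deviates from the CCS mean by 2.0 or greater. The variable names and data values are illustrative assumptions, not taken from the study.

```python
# Hypothetical illustration of the per-statement comparison in Methods/Results:
# score repeated ChatGPT responses on a 9-point Likert scale, compare the
# ChatGPT mean with the expert consensus (CCS) mean, and flag deviations >= 2.0.
from statistics import mean, stdev

# Hypothetical Likert scores (1-9) from repeated ChatGPT queries on one statement
chatgpt_scores = [8, 9, 8, 7, 9]
ccs_mean = 5.5  # hypothetical expert consensus mean for the same statement

gpt_mean = mean(chatgpt_scores)
gpt_sd = stdev(chatgpt_scores)
deviation = abs(gpt_mean - ccs_mean)

flagged = deviation >= 2.0  # threshold reported in the Results
print(f"mean={gpt_mean:.2f}, sd={gpt_sd:.2f}, "
      f"deviation={deviation:.2f}, flagged={flagged}")
```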
