Abstract

This study assessed the consistency and accuracy of responses provided by two artificial intelligence (AI) applications, ChatGPT and Google Bard (Gemini), to questions about dental trauma. Based on the International Association of Dental Traumatology guidelines, 25 dichotomous (yes/no) questions were posed to ChatGPT and Google Bard over 10 days. The responses were recorded and compared with the correct answers, and statistical analyses, including Fleiss' kappa, were conducted to determine the agreement and consistency of the responses. Analysis of the 4500 responses revealed that both applications answered 57.5% of the questions correctly. Google Bard demonstrated a moderate level of agreement, with varying rates of incorrect answers and of referrals to a physician. Although ChatGPT and Google Bard are potential knowledge resources, their consistency and accuracy in responding to dental trauma queries remain limited. Further research involving AI models specially trained in endodontics is warranted to assess their suitability for clinical use.
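The agreement analysis above relies on Fleiss' kappa, which measures how consistently repeated ratings (here, repeated answers to the same question) agree beyond chance. A minimal sketch of the computation follows; the question counts, session counts, and data are illustrative placeholders, not the study's actual responses.

```python
# Illustrative sketch of Fleiss' kappa for repeated yes/no answers.
# All numbers below are hypothetical, not taken from the study.

def fleiss_kappa(counts):
    """counts[i][j] = number of times category j was given for subject i.
    Every subject (question) must receive the same number of ratings n."""
    N = len(counts)                 # number of subjects (questions)
    n = sum(counts[0])              # ratings per subject (repeat sessions)
    k = len(counts[0])              # number of categories (e.g., yes/no)
    total = N * n
    # Marginal proportion of each category across all ratings.
    p = [sum(row[j] for row in counts) / total for j in range(k)]
    # Per-subject agreement: fraction of concordant rating pairs.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N            # mean observed agreement
    P_e = sum(pj * pj for pj in p)  # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Example: 3 yes/no questions, each asked in 4 sessions; rows are [yes, no].
kappa = fleiss_kappa([[4, 0], [2, 2], [0, 4]])  # → 5/9 ≈ 0.556, moderate
```

A kappa near 1 indicates near-perfect repeatability across sessions, while values around 0.4–0.6 are conventionally read as moderate agreement, matching the qualitative label used in the abstract.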
