This study evaluates and compares the performance of ChatGPT-3.5, ChatGPT-4 Omni (4o), Google Bard, and Microsoft Copilot in answering text-based multiple-choice questions on oral radiology featured in the Dental Specialty Admission Exam conducted in Türkiye. The questions were sourced from the open-access question bank of the Turkish Dental Specialty Admission Exam, covering the years 2012 to 2021. The study included 123 questions, each with five options and one correct answer. The accuracy levels of ChatGPT-3.5, ChatGPT-4o, Google Bard, and Microsoft Copilot were compared using descriptive statistics, the Kruskal-Wallis test, Dunn's post hoc test, and Cochran's Q test. The accuracy of the responses generated by the four chatbots differed significantly (p < 0.001). ChatGPT-4o achieved the highest accuracy at 86.1%, followed by Google Bard at 61.8%; ChatGPT-3.5 demonstrated an accuracy rate of 43.9%, while Microsoft Copilot recorded 41.5%. ChatGPT-4o demonstrated superior accuracy and advanced reasoning capabilities, positioning it as a promising educational tool. With regular updates, it has the potential to serve as a reliable source of information for both healthcare professionals and the general public.
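The statistical workflow outlined above (Cochran's Q on paired binary outcomes across the same 123 questions, plus Kruskal-Wallis with Dunn's post hoc comparisons) can be reproduced in outline with standard Python libraries. The sketch below is not the authors' analysis code; the correctness matrix is simulated from the reported accuracy rates purely for illustration, and column order and variable names are assumptions.

```python
# Minimal sketch, assuming per-question correctness is coded 1 = correct, 0 = incorrect.
# The `scores` matrix is simulated placeholder data, NOT the study's actual results.
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import cochrans_q

rng = np.random.default_rng(0)
n_questions = 123

# Hypothetical binary correctness matrix: rows = questions, columns = chatbots
# in the order ChatGPT-3.5, ChatGPT-4o, Google Bard, Microsoft Copilot.
reported_rates = [0.439, 0.861, 0.618, 0.415]  # used only to simulate example data
scores = np.column_stack([rng.binomial(1, p, n_questions) for p in reported_rates])

# Cochran's Q test: do the four related (same questions) binary outcomes differ?
q_result = cochrans_q(scores)
print(f"Cochran's Q = {q_result.statistic:.2f}, p = {q_result.pvalue:.4f}")

# Kruskal-Wallis test across the four chatbots; in the study this was followed
# by Dunn's post hoc pairwise comparisons (available e.g. via scikit-posthocs).
h_stat, p_val = stats.kruskal(*[scores[:, i] for i in range(scores.shape[1])])
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_val:.4f}")
```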