Abstract

This study aimed to assess the accuracy of ChatGPT answers concerning orthodontic clear aligners. A cross-sectional content analysis of ChatGPT-generated responses to queries related to clear aligner treatment (CAT) was undertaken. A total of 111 questions were generated by three orthodontists on the basis of a set of predefined domains and subdomains. The artificial intelligence (AI)-generated (ChatGPT) answers were extracted, and their accuracy was rated independently by five orthodontists using a prepiloted four-point scale scoring rubric. Descriptive statistics were performed. The total mean accuracy score for the entire set was 2.6 ± 1.1. Of the AI-generated answers, 58% were scored as objectively true, 18% as selected facts, 9% as minimal facts, and 15% as false. False claims included the ability of CAT to reduce the need for orthognathic surgery (4.0 ± 0.0), improve airway function (3.8 ± 0.5), achieve root parallelism (3.6 ± 0.5), alleviate sleep apnea (3.8 ± 0.5), and produce more stable results than fixed appliances (3.8 ± 0.5). The overall accuracy of ChatGPT responses to questions concerning CAT was suboptimal, and the responses lacked citations to relevant literature. The ability of the software to offer current and precise information was limited. Clinicians and patients must therefore be mindful of false claims and of relevant facts omitted in the answers generated by ChatGPT.
