Aim
Evaluation of the quality of dental information produced by the ChatGPT artificial intelligence language model within the context of oral surgery, preventive dentistry, and oral cancer.

Methodology
This study adopted a quantitative methods approach. Experts prepared 50 questions, covering risk factors, preventive measures, diagnostic methods, and treatment options, to be presented to ChatGPT. Its responses were rated for accuracy, completeness, relevance, clarity (comprehensibility), and potential risks using a standardized scoring rubric. The evaluation process also included feedback on the strengths, weaknesses, and potential areas for improvement in ChatGPT's responses.

Results
ChatGPT achieved its highest score in preventive dentistry (4.3/5) and communicated complex information coherently, but showed lower accuracy in oral surgery (3.9/5) and oral cancer (3.6/5), with several gaps in post-operative instructions, personalized risk assessments, and specialized diagnostic methods. Potential risks, such as a lack of individualized advice, appeared in 53% of oral cancer responses and 40% of oral surgery responses. While showing promise in some domains, ChatGPT had important limitations in specialized areas that require nuanced expertise.

Conclusion
The findings point to the need for professional supervision when using AI-generated information, and for ongoing evaluation as capabilities evolve, to ensure responsible implementation in the best interest of patient care.
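As a minimal sketch of the aggregation described in the Methodology, the snippet below shows how per-domain mean rubric scores (out of 5) and risk-flag percentages like those reported above might be computed. The data layout, variable names, and `domain_summary` helper are illustrative assumptions, not the authors' actual analysis code.

```python
from statistics import mean

# Hypothetical rubric ratings: one record per ChatGPT response, with the
# clinical domain, a 1-5 score per rubric dimension, and a flag marking
# whether a potential risk (e.g. lack of individualized advice) was noted.
# Values here are placeholders, not the study's actual data.
ratings = [
    {"domain": "preventive dentistry",
     "scores": {"accuracy": 5, "completeness": 4, "relevance": 4,
                "clarity": 5, "risk": 4},
     "risk_flagged": False},
    {"domain": "oral cancer",
     "scores": {"accuracy": 3, "completeness": 4, "relevance": 4,
                "clarity": 4, "risk": 3},
     "risk_flagged": True},
]

def domain_summary(ratings, domain):
    """Mean overall rubric score (out of 5) and percentage of
    risk-flagged responses for one clinical domain."""
    subset = [r for r in ratings if r["domain"] == domain]
    overall = [mean(r["scores"].values()) for r in subset]
    flagged = sum(r["risk_flagged"] for r in subset) / len(subset)
    return round(mean(overall), 1), round(100 * flagged)

score, risk_pct = domain_summary(ratings, "oral cancer")
print(f"oral cancer: {score}/5, risks noted in {risk_pct}% of responses")
```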