Abstract

Conventional grading of dental students' projects in preclinical courses has mainly relied on visual evaluation by experienced instructors. The purpose of this study was to compare conventional visual grading in a dental anatomy course at one U.S. dental school with a novel digital assessment technique. A total of sixty samples of tooth #14, comprising two sets of faculty wax-ups (n=30), student wax-ups (n=15), and dentoform teeth (n=15), were used for this study. Two additional faculty members visually graded the samples according to a checklist and then repeated the grading after one week. The sample wax-up with the highest score from the visual grading was selected as the master model for the digital grading, which was also performed twice with an interim period of one week. Descriptive statistics and signed rank tests for systematic bias were used for intra- and interrater comparisons, and the intraclass correlation coefficient (ICC) was used as a measure of intra- and interrater reliability. None of the faculty members achieved the minimum acceptable intrarater agreement of 0.8. Interrater agreement was substantially lower than intrarater agreement for the visual grading, whereas all measures of intrarater agreement for the digital assessment technique were greater than 0.9 and considered excellent. These results suggest that visual grading is limited by modest intrarater reliability and low interrater agreement, whereas digital grading is a promising evaluation method showing excellent intrarater reliability and correlation. Correlation between visual and digital grading was consistently modest, partly supporting the potential use of digital technology in dental anatomy grading.
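The reliability analysis described above rests on the intraclass correlation coefficient and signed rank tests for systematic bias. The following Python sketch is illustrative only, not the authors' analysis code: it assumes a two-way random-effects single-measure ICC, ICC(2,1), a fully crossed design in which the same rater scores every sample in both sessions, and simulated checklist scores on a 0-100 scale.

```python
# Illustrative sketch (not the study's actual analysis code): compute an
# intraclass correlation, ICC(2,1), and a Wilcoxon signed-rank test for
# systematic bias between two grading sessions of the same samples.
import numpy as np
from scipy.stats import wilcoxon

def icc2_1(scores: np.ndarray) -> float:
    """ICC(2,1) for an (n_samples x n_sessions_or_raters) score matrix."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-sample means
    col_means = scores.mean(axis=0)   # per-session (or per-rater) means

    # Two-way ANOVA mean squares
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    residual = scores - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(residual ** 2) / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical data: 60 samples graded twice by the same rater
# (columns = first and second grading sessions), 0-100 checklist scale.
rng = np.random.default_rng(0)
true_quality = rng.uniform(60, 95, size=60)
session1 = true_quality + rng.normal(0, 3, size=60)
session2 = true_quality + rng.normal(0, 3, size=60)
ratings = np.column_stack([session1, session2])

print(f"Intrarater ICC(2,1): {icc2_1(ratings):.3f}")   # reliability
stat, p = wilcoxon(session1, session2)                 # systematic bias
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.3f}")
```

In terms of the thresholds cited in the abstract, an ICC below 0.8 from a computation like this would fall short of the minimum acceptable intrarater agreement, while values above 0.9 would be considered excellent.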
