Abstract

The aim of this study was to find ways to improve the reliability of the cut-off scores that are typically used to make high-stakes decisions in dental education, by empirically comparing two rating methods: the Yes/No method and the Percentage method. Both rating methods are commonly used when the Angoff method is applied to determine a cut-off score that divides examinees into a minimally competent group (pass) and an incompetent group (fail). Expert panel data were collected using both methods from 11 and 13 panel members in two consecutive years, respectively. The data were analysed within the generalisability theory framework to quantify the relative influence of each factor (eg panel, item, rating round) on the variability of the cut-off scores, the standard error of measurement and panel agreement. The results suggest that (a) the two methods can make a substantial difference in overall success rates for college senior students; (b) item-related variance components were generally large, whilst rater-related variance components were small; (c) standard errors of measurement for the cut-off scores decreased from Cohort 1 to Cohort 2 as the number of items increased and as the expert panel members received more training; and (d) the Percentage method yielded higher agreement amongst the panel in both years. The results provide practical guidelines for dental educators who strive to control the quality of final competency examinations and their cut-off scores with respect to standard-setting practice and panel data analysis. It can be concluded that evaluation with the Percentage method results in more reliable outcomes than evaluation with the Yes/No method when criterion-referenced assessment is applied to determine the cut-off scores of competency tests.
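The abstract does not give the computational details of the two rating methods. As a rough illustration only, the sketch below shows the conventional Angoff aggregation (mean rating over items, then over panellists) under both methods; all ratings, panel sizes and item counts are hypothetical, and the simple standard-error formula used here is a crude stand-in for the full generalisability theory decomposition reported in the paper.

```python
import statistics

# Hypothetical panel ratings for a 5-item exam and 4 panellists.
# Yes/No method: each panellist judges whether a minimally competent
# examinee would answer the item correctly (1 = yes, 0 = no).
yes_no = [
    [1, 0, 1, 1, 0],   # panellist 1
    [1, 1, 1, 0, 0],   # panellist 2
    [0, 1, 1, 1, 1],   # panellist 3
    [1, 0, 1, 1, 0],   # panellist 4
]

# Percentage method: each panellist estimates the probability (0-100)
# that a minimally competent examinee answers the item correctly.
percentage = [
    [70, 40, 85, 60, 30],
    [65, 55, 80, 45, 35],
    [50, 60, 90, 70, 55],
    [75, 45, 85, 65, 40],
]

def angoff_cutoff(ratings):
    """Cut-off = mean of panellist-level expected scores, where each
    panellist's expected score is the mean of their item ratings."""
    panellist_means = [statistics.mean(row) for row in ratings]
    cutoff = statistics.mean(panellist_means)
    # Naive standard error of the cut-off: sd of the panellist means
    # divided by sqrt(number of panellists). The paper instead derives
    # the standard error from G-theory variance components.
    sem = statistics.stdev(panellist_means) / len(panellist_means) ** 0.5
    return cutoff, sem

cut_yn, sem_yn = angoff_cutoff(yes_no)        # proportion-correct scale
cut_pct, sem_pct = angoff_cutoff(percentage)  # percentage scale

print(f"Yes/No cut-off:     {cut_yn:.2f} (SE {sem_yn:.2f})")
print(f"Percentage cut-off: {cut_pct:.1f}% (SE {sem_pct:.1f})")
```

Because the Yes/No method forces each judgment to 0 or 1, small shifts in individual ratings move the cut-off in coarse steps, which is consistent with the abstract's finding that the Percentage method yielded higher panel agreement.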
