Abstract

Objective Structured Clinical Examinations (OSCEs) are used globally to evaluate clinical competence in the education of health professionals. Despite the objective intent of OSCEs, the scoring methods used by examiners remain a potential source of measurement error affecting the precision with which test scores are determined. In this study, we investigated differences in the inter-rater reliability of objective checklist scores and subjective global rating scores awarded by examiners (who completed an online training program to standardise scoring techniques) across two medical schools. Examiners’ perceptions of the e-scoring program were also investigated. Two Australian universities shared three OSCE stations in their end-of-year undergraduate medical OSCEs. The scenarios were video-recorded and used for online examiner training prior to the actual examinations. Examiner ratings of performance at both sites were analysed using generalisability theory. A single-facet, all-random persons-by-raters design [PxR] was used to measure inter-rater reliability for each station, separately for checklist scores and global ratings. The resulting variance components were pooled across stations and examination sites. Decision studies were used to obtain reliability estimates. There was no significant mean score difference between examination sites. Variation in examinee ability accounted for 68.3% of the total variance in checklist scores and 90.2% in global ratings. Rater contribution was 1.4% and 0% of the total variance in checklist scores and global ratings respectively, reflecting high inter-rater reliability of the scores provided by co-examiners across the two schools. Score variance due to interaction and residual error was larger for checklist scores than for global ratings (30.3% vs 9.7%). Reproducibility coefficients for global ratings were higher than for checklist scores. Survey results showed that the e-scoring package facilitated consensus on scoring techniques. This approach to examiner training also allowed examiners to calibrate the OSCEs in their own time. This study revealed that inter-rater reliability was higher for global ratings than for checklist scores, thus providing further evidence for the reliability of subjective examiner ratings.
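To make the analytic design concrete, the sketch below illustrates how variance components and a decision-study (D-study) reproducibility coefficient can be estimated for a single-facet, fully crossed persons-by-raters [PxR] design of the kind described above. The data, sample sizes, and function names are hypothetical and are not the authors' code; the study itself pooled variance components across stations and examination sites and does not publish an analysis script.

```python
# Minimal sketch of a single-facet, fully crossed persons-by-raters [PxR]
# generalisability (G-study) analysis, assuming one score per person-rater cell.
# All data below are simulated for illustration only.

import numpy as np

def pxr_variance_components(scores):
    """Estimate variance components for a persons x raters crossed design.

    scores: 2D array, rows = examinees (persons), columns = raters.
    """
    n_p, n_r = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    # Two-way ANOVA without replication: interaction and residual error are confounded.
    ss_p = n_r * np.sum((person_means - grand) ** 2)
    ss_r = n_p * np.sum((rater_means - grand) ** 2)
    ss_pr = np.sum((scores - grand) ** 2) - ss_p - ss_r

    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

    var_pr = ms_pr                              # person x rater interaction + residual error
    var_p = max((ms_p - ms_pr) / n_r, 0.0)      # examinee (universe-score) variance
    var_r = max((ms_r - ms_pr) / n_p, 0.0)      # rater leniency/stringency variance
    return var_p, var_r, var_pr

def d_study_g_coefficient(var_p, var_pr, n_raters):
    """Relative (norm-referenced) reproducibility coefficient for n_raters per examinee."""
    return var_p / (var_p + var_pr / n_raters)

# Hypothetical example: 20 examinees each scored independently by 2 co-examiners.
rng = np.random.default_rng(0)
true_ability = rng.normal(70, 8, size=(20, 1))
scores = true_ability + rng.normal(0, 4, size=(20, 2))

var_p, var_r, var_pr = pxr_variance_components(scores)
total = var_p + var_r + var_pr
print(f"persons: {100 * var_p / total:.1f}%  raters: {100 * var_r / total:.1f}%  "
      f"interaction/error: {100 * var_pr / total:.1f}%")
print(f"G (2 raters): {d_study_g_coefficient(var_p, var_pr, 2):.2f}")
```

In this framing, a larger share of total variance attributable to persons (as reported for global ratings) yields a higher reproducibility coefficient for a given number of raters, which is the pattern the abstract describes.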

Highlights

  • The Objective Structured Clinical Examination (OSCE) is recognised by medical educators as an opportunity to evaluate essential clinical skills and competencies necessary for progression in the medical course (Harden & Gleeson, 1979; Hodges, 2003; Newble, 2004)

  • Pooled score variance attributed to student ability was higher for global ratings than for checklist scores

  • A growing body of literature has reported that global ratings have higher reliability than checklist scores and are better able to discriminate between examinees (Hodges et al., 1999; Govaerts et al., 2002; Hodges et al., 2003; Wilkinson et al., 2003)

Introduction

The Objective Structured Clinical Examination (OSCE) is recognised by medical educators as an opportunity to evaluate essential clinical skills and competencies necessary for progression in the medical course (Harden & Gleeson, 1979; Hodges, 2003; Newble, 2004). Its widespread adoption as a means of surmounting many of the inherent validity problems of oral clinical examinations stems from its desirable objective-testing characteristics, whereby all examinees are exposed to the same test conditions (Harden et al., 1975; Kirby & Curry, 1982; Downing & Yudkowsky, 2009). In the OSCE format, each student rotates through a series of time-limited clinical “stations”.
