Introduction
Ensuring examiner equivalence across distributed assessment locations is a priority within distributed Objective Structured Clinical Exams (OSCEs) but is challenging because examiners are typically fully nested within locations (i.e. there is no overlap in the performances seen by different groups of examiners). Video-based Examiner Score Comparison and Adjustment (VESCA) is a recently developed method which uses video-based linking to compare and (potentially) adjust for the effect of different groups of examiners within OSCEs. Whilst initial research on VESCA has been promising, the accuracy of the resulting adjusted scores is unknown. Given this, we aimed to investigate the accuracy of adjusted scores produced by VESCA under a range of plausible operational parameters.

Methods
Using statistical simulation, we investigated how (1) the proportion of participating examiners, (2) the number of linking videos, (3) baseline differences in examiner stringency between schools (i.e. whether examiners in School A are, on average, more stringent than the examiners in School B), (4) the number of OSCE stations and (5) different degrees of random error within examiners’ judgements influenced the accuracy of adjusted scores. We generated distributions of students’ “true” performances across several stations, added examiner error, and simulated linking through crossed video-scoring (as occurs in VESCA). We then used Many Facet Rasch Modelling to produce an adjusted score for each student, which we compared with the corresponding original “true” performance score. We replicated this 1000 times for each permutation to determine the average error reduction and the proportion of students whose scores became more accurate. Simulation parameters were derived from a real, summative, whole-curriculum undergraduate Year 3 OSCE at Keele University School of Medicine.

Results
We found that in all conditions where no baseline difference existed between groups of examiners, score adjustment only minimally improved, or even worsened, score accuracy. Conversely, as the size of the baseline difference between schools increased, adjustment accuracy increased, reducing error by up to 71% and making scores more accurate for up to 93% of students in the 20% baseline-difference condition.

Conclusions
Score adjustment through VESCA has the potential to substantially enhance equivalence for candidates in distributed OSCEs in some circumstances, whilst making scores less accurate in others. These findings will support judgements about when score adjustment may beneficially aid OSCE equivalence.
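A minimal sketch of the generate–adjust–compare loop described in the Methods above, written in Python. Every parameter here (cohort size, number of stations and linking videos, error magnitudes, the 20% baseline difference) is an illustrative assumption rather than the study's actual values, and a simple mean-offset correction stands in for the Many Facet Rasch Modelling used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only -- not the study's actual settings
N_STUDENTS, N_STATIONS = 200, 10   # students split across two schools; OSCE stations
N_VIDEOS = 5                       # shared linking videos scored by both examiner groups
BASELINE_DIFF = 0.20               # School B examiners score 20% higher on average
EXAMINER_SD = 0.05                 # random error in individual examiner judgements


def simulate_once():
    # 1. "True" performance of each student on each station (proportion scores)
    true_scores = rng.normal(0.65, 0.10, size=(N_STUDENTS, N_STATIONS))

    # 2. Students are examined at one of two schools; School B examiners carry a
    #    systematic stringency/leniency offset on top of random error
    school = np.repeat([0, 1], N_STUDENTS // 2)
    offset = np.where(school == 1, BASELINE_DIFF, 0.0)[:, None]
    observed = true_scores + offset + rng.normal(0, EXAMINER_SD, true_scores.shape)

    # 3. Crossed video linking: examiners from both schools score the same recorded
    #    performances, so the group offset can be estimated without confounding it
    #    with real differences between the two student cohorts
    video_truth = rng.normal(0.65, 0.10, size=N_VIDEOS)
    video_a = video_truth + rng.normal(0, EXAMINER_SD, N_VIDEOS)
    video_b = video_truth + BASELINE_DIFF + rng.normal(0, EXAMINER_SD, N_VIDEOS)
    est_offset = video_b.mean() - video_a.mean()

    # 4. Stand-in adjustment: subtract the estimated offset from School B students
    #    (the study itself does this via Many Facet Rasch Modelling)
    adjusted = observed - np.where(school == 1, est_offset, 0.0)[:, None]

    # 5. Compare each student's error before and after adjustment
    err_before = np.abs(observed.mean(axis=1) - true_scores.mean(axis=1))
    err_after = np.abs(adjusted.mean(axis=1) - true_scores.mean(axis=1))
    return err_before.mean(), err_after.mean(), (err_after < err_before).mean()


# Replicate many times per parameter permutation (the paper uses 1000 replications)
results = np.array([simulate_once() for _ in range(1000)])
before, after, improved = results.mean(axis=0)
print(f"mean error {before:.3f} -> {after:.3f}; "
      f"scores more accurate for {improved:.0%} of students")
```

Setting BASELINE_DIFF to 0 in this sketch reproduces the qualitative pattern reported in the Results: with no systematic difference between examiner groups, the adjustment only adds the noise of the estimated offset, whereas a large baseline difference is largely removed.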