Abstract

OSCEs (Objective Structured Clinical Examinations) are widely used in the health professions to assess competence in clinical skills. Raters score candidates performing specific tasks using standardized binary checklists (CL) or multi-dimensional global rating scales (GRS). This study assessed the reliability of CL and GRS scores in the assessment of veterinary students, and is the first study to demonstrate the reliability of the GRS within veterinary medical education. Twelve raters from two schools (6 from the University of Calgary [UCVM] and 6 from the Royal (Dick) School of Veterinary Studies [R(D)SVS]) were asked to score 12 students (6 from each school). All raters assessed all students (via video recordings) across 4 OSCE stations (bovine haltering, gowning and gloving, equine bandaging, and skin suturing). Raters scored students using the CL first, followed by the GRS. Novice raters (the 6 from R(D)SVS) were analyzed independently of expert raters (the 6 from UCVM). Generalizability theory (G theory), analysis of variance (ANOVA), and t-tests were used to determine the reliability of rater scores, assess between-school differences (by student and by rater), and determine whether CL and GRS scores differed. There was no significant difference in rater performance between the CL and the GRS. CL scores were significantly higher than GRS scores. The reliability of checklist scores was .42 for novice raters and .76 for expert raters; the reliability of global rating scale scores was .70 and .86, respectively. A decision study (D-study) showed that, once raters are trained using the CL, the GRS can be used to reliably score clinical skills in veterinary medicine with both novice and experienced raters.
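As a rough illustration of the G-theory analysis described above, reliability in a fully crossed students × raters design is estimated from variance components obtained via a two-way ANOVA, and a D-study then projects reliability for different numbers of raters. The sketch below is illustrative only; the data, function names, and single-facet design are assumptions, not the study's actual analysis.

```python
import numpy as np

def g_study(scores):
    """Estimate variance components for a fully crossed
    students (p) x raters (r) design with one score per cell."""
    n_p, n_r = scores.shape
    grand = scores.mean()
    p_means = scores.mean(axis=1)
    r_means = scores.mean(axis=0)
    # Sums of squares for students, raters, and residual (p x r interaction + error)
    ss_p = n_r * ((p_means - grand) ** 2).sum()
    ss_r = n_p * ((r_means - grand) ** 2).sum()
    ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_r
    # Mean squares
    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_res = ss_res / ((n_p - 1) * (n_r - 1))
    # Variance components (negative estimates truncated to zero)
    var_p = max((ms_p - ms_res) / n_r, 0.0)
    var_r = max((ms_r - ms_res) / n_p, 0.0)
    var_res = ms_res
    return var_p, var_r, var_res

def g_coefficient(var_p, var_res, n_raters):
    """Relative G coefficient for scores averaged over n_raters raters
    (the D-study step: vary n_raters to project reliability)."""
    return var_p / (var_p + var_res / n_raters)
```

In a D-study, calling `g_coefficient` with increasing `n_raters` shows how adding raters raises the projected reliability of the averaged scores.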

Highlights

  • Educating veterinary students to become competent, autonomous practitioners requires ongoing assessment of students’ abilities and performance using methods that provide reliable and valid scores

  • The two–way Analysis of Variance (ANOVA) revealed a significant difference in student performance by school in both the CL (F(1,140) = 78.53, p

  • Total CL scores were significantly higher than total GRS scores (UCVM t (71) = 9.17, p
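The CL-versus-GRS comparison in the last highlight is a paired comparison, since each student receives both a CL and a GRS total from the same raters. A minimal paired t-statistic sketch (the data and function name are illustrative assumptions, not the study's values):

```python
import numpy as np

def paired_t(x, y):
    """Paired t-statistic for two scores on the same students.
    Degrees of freedom = n - 1."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    return d.mean() / (d.std(ddof=1) / np.sqrt(n))
```

A large positive t (compared against the t distribution with n − 1 degrees of freedom) indicates the first set of totals is systematically higher, as reported here for CL over GRS.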


Introduction

Educating veterinary students to become competent, autonomous practitioners requires ongoing assessment of students’ abilities and performance using methods that provide reliable and valid scores. [2,4,5] Advantages of the OSCE over previously used methods of clinical skills assessment include standardization of the tasks performed by all students, the ability to use trained non-subject-matter experts as raters, and the reliability of judgments made between raters. [8,10,15] To date, only one recent report has discussed the use of the GRS to assess clinical skills proficiency in veterinary students; those authors addressed only pre- and post-score student satisfaction with the tool and did not demonstrate whether scores from the GRS were reliable or valid. While previous studies have compared CL and GRS scores, this study adds to the literature by providing a direct comparison of novice and expert raters and by demonstrating that CLs can be used to inform the use of the GRS with both types of raters.

