Abstract

Background: In-training evaluation reports (ITERs) of student workplace-based learning are completed by clinical supervisors across various health disciplines. However, outside of medicine, the quality of submitted workplace-based assessments is largely uninvestigated. This study assessed the quality of ITERs in pharmacy and whether clinical supervisors could be trained to complete higher-quality reports.

Methods: A random sample of ITERs submitted in a pharmacy program during 2013–2014 was evaluated. These ITERs served as a historical control (control group 1) for comparison with ITERs submitted in 2015–2016 by clinical supervisors who participated in an interactive faculty development workshop (intervention group) and those who did not (control group 2). Two trained independent raters scored the ITERs using a previously validated nine-item scale assessing report quality, the Completed Clinical Evaluation Report Rating (CCERR). The scoring scale for each item is anchored at 1 ("not at all") and 5 ("exemplary"), with 3 categorized as "acceptable".

Results: The mean CCERR score for reports completed after the workshop (22.9 ± 3.39) did not significantly improve compared with prospective control group 2 (22.7 ± 3.63, p = 0.84) and was worse than historical control group 1 (37.9 ± 8.21, p = 0.001). Mean item scores for individual CCERR items were below the acceptable threshold for 5 of the 9 domains in control group 1, including supervisor-documented evidence of specific examples to clearly explain weaknesses and concrete recommendations for student improvement. Mean item scores were below the acceptable threshold for 6 and 7 of the 9 domains in control group 2 and the intervention group, respectively.

Conclusions: This study is the first to use the CCERR to evaluate ITER quality outside of medicine. Findings demonstrate low baseline CCERR scores in a pharmacy program that were not demonstrably changed by a faculty development workshop, but strategies are identified to augment future rater training.

Highlights

  • In-training evaluation reports (ITERs) of student workplace-based learning are completed by clinical supervisors across various health disciplines

  • Here, we report the first experience using the CCERR as a measure of ITER quality in pharmacy student experiential training and the effects of a faculty development workshop

  • To determine ITER quality in our program, we evaluated a random sample of those completed in the 2013–2014 academic year using the Completed Clinical Evaluation Report Rating (CCERR) scoring tool (Additional file 1)



Introduction

Health professional students participate in workplace-based training as a fundamental aspect of their education. Referred to as field practicum, rotation, or clerkship evaluations, in-training evaluation reports (ITERs) are completed by clinical supervisors to document a trainee's performance during this training. Despite programs' reliance on an ITER as an account of a trainee's clerkship performance and, collectively, as a reliable summative record of a student's demonstrated skills, knowledge, and behaviours over time, rater variability pervades workplace-based assessment (WBA) and is typically considered undesirable [4]. Although most ITERs outline student competency components to guide users, studies demonstrate that clinical supervisors do not interpret them uniformly. Global impressions of a student may shape specific domain scores indiscriminately, and the mental workload required to process and score multiple dimensions further contributes to unconscious cognitive biases [9, 10].

