Abstract
Grades in clinical clerkships are typically based on a combination of clinical assessments from teachers and scores on more reliable (but perhaps less valid) standardised tests of knowledge. It is not clear how these scores are combined in practice to yield a final summative grade. Our subjects were 83 students who rotated through five clinical clerkships during a single year. After computing univariate correlations between clinical assessment scores and standardised examination scores, we performed logistic regression analyses for each clerkship to predict the final grade from these two variables. We compared actual grades with predicted grades under various hypothetical policies for combining the two variables, and finally assessed whether some students would systematically benefit from these policies. Clerkships varied in the univariate correlation between clinical assessment scores and standardised examination scores; clerkships with the lowest correlations tended to give more weight to standardised examination scores. Grading committees adjusted a substantial minority of grades to account for factors that were reflected in neither score, but there did not appear to be a systematic bias in this committee effect across the five clerkships. These results suggest a number of testable hypotheses about the cognitive processes by which evaluators combine various pieces of information to yield a summative performance score.
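The logistic regression step described above can be sketched as follows. This is not the authors' code; the data below are synthetic, and the score scales, cohort size, and binary "honours" outcome are illustrative assumptions only. The fitted coefficients play the role of the implicit weights a clerkship places on each score.

```python
# Illustrative sketch (synthetic data, not the study dataset): predicting a
# binary final grade from a clinical assessment score and a standardised
# examination score with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 83  # cohort size matching the study; the scores themselves are invented

clinical = rng.normal(70, 10, n)   # hypothetical clinical assessment scores
exam = rng.normal(500, 80, n)      # hypothetical standardised exam scores

# Synthetic outcome: "honours" driven by a weighted sum of the two scores
# plus noise, standing in for a grading committee's decision.
honours = ((0.6 * (clinical - 70) / 10
            + 0.4 * (exam - 500) / 80
            + rng.normal(0, 0.5, n)) > 0).astype(int)

X = np.column_stack([clinical, exam])
model = LogisticRegression().fit(X, honours)

# One coefficient per predictor: their relative magnitudes (after accounting
# for scale) suggest how much weight each score carries in the final grade.
print(model.coef_)
```

Comparing such fitted weights across clerkships is one way to operationalise the paper's question of how the two scores are combined in practice.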