Abstract

Medical students' final clinical grades in internal medicine are based on the results of multiple assessments that reflect not only the students' knowledge, but also their skills and attitudes. This study aimed to examine the sources of validity evidence for internal medicine final assessment results comprising scores from 3 evaluations and 2 examinations. The final assessment scores of 8 cohorts of Year 4 medical students in a 6-year undergraduate programme were analysed. The final assessment scores consisted of scores in ward evaluations (WEs), preceptor evaluations (PREs), outpatient clinic evaluations (OPCs), general knowledge and problem-solving multiple-choice questions (MCQs), and objective structured clinical examinations (OSCEs). Sources of validity evidence examined were content, response process, internal structure, relationship to other variables, and consequences. The median generalisability coefficient of the OSCEs was 0.62. The internal consistency reliability of the MCQs was 0.84. Scores for OSCEs correlated well with WE, PRE and MCQ scores, with observed (disattenuated) correlations of 0.36 (0.77), 0.33 (0.71) and 0.48 (0.69), respectively. Scores for WEs and PREs correlated better with OSCE than with MCQ scores. Sources of validity evidence including content, response process, internal structure and relationship to other variables were shown for most components. There is sufficient validity evidence to support the use of various types of assessment scores for final clinical grades at the end of an internal medicine rotation. Validity evidence should be examined for any final student evaluation system in order to establish the meaningfulness of the student assessment scores.
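The disattenuated correlations reported above follow Spearman's classical correction for attenuation, which estimates what the correlation between two scores would be if both were measured without error, by dividing the observed correlation by the square root of the product of the two reliabilities. As a minimal illustrative sketch (the reliability values below are taken from the abstract's reported OSCE generalisability coefficient and MCQ internal consistency; they are not guaranteed to reproduce the paper's exact cohort-level calculations):

```python
import math

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation:
    r_true = r_observed / sqrt(rel_x * rel_y),
    where rel_x and rel_y are the reliabilities of the two measures."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Observed OSCE-MCQ correlation 0.48; OSCE G-coefficient 0.62; MCQ reliability 0.84.
r_true = disattenuate(0.48, 0.62, 0.84)
print(round(r_true, 2))
```

Because the correction divides by a quantity less than 1 whenever either reliability is below perfect, the disattenuated value is always at least as large as the observed correlation, which is why each parenthesised value in the abstract exceeds its observed counterpart.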
