Abstract

In the assessment of physical examination skills, two approaches are common: "Describing Findings" (students comment on their findings throughout the examination) and examining as "Usual Practice" (students report findings only at the end). Despite numerous potential influences on both students' performance and assessors' judgements, no prior studies have investigated the influence of either approach on assessments. We used a two-group, randomised, crossover design. Within a 2-station simulated physical examination OSCE, we manipulated whether students "described findings" or examined as "usual practice", collecting (1) performance scores; (2) students' and examiners' cognitive load ratings; (3) fluency and (4) completeness ratings of students' presentations; and (5) whether students finished the task, and comparing all five end-points across conditions. Neither students' performance scores nor examiners' cognitive load was influenced by experimental condition. Students reported higher cognitive load when "describing findings" (7/9) than when examining as "usual practice" (6/9, p=0.002), and were less likely to finish (4 vs 12, p=0.007). Presentation completeness was higher for "describing findings" (mean=2.40, 95% CI 2.05-2.74) than for "usual practice" (mean=1.92, 95% CI 1.65-2.18; p=0.016), whilst fluency ratings showed a similar trend. The decision to "Describe Findings" or examine as "Usual Practice" does not appear neutral, potentially influencing students' efficiency, recall and (by inference) learning. Institutions should explicitly select one option based on their assessment goals.
