Abstract
Background: Our study aimed to compare assessment results between virtual patient simulation (VPS) and regular course exams in an Internal Medicine course for undergraduate medical students.

Methods: Four cohorts of students (n = 216) used a VPS or lectures for learning (terms 1 and 2); VPS and lectures, or lectures only (term 3); and a paired set-up with both VPS and lectures (term 4). Assessment results, measured with both a VPS-based exam and a paper-based exam, were compared. A scoring rubric (0–6), developed and validated for the purpose of the trial, was applied to both types of assessment. Mean score differences were compared across the four cohorts.

Results: Both VPS and regular examination results were significantly higher in the VPS group than in the regular exam group (p < 0.001) in terms 1, 2 and 3. In term 4, the paired mean difference was 0.66 (95% confidence interval (CI) 0.50, 0.83; p < 0.001) for haematology and 0.57 (95% CI 0.45, 0.69; p < 0.001) for cardiology.

Conclusion: Our findings suggest that using VPS for both learning and assessment supports learning. VPS performed better than traditional assessment methods when the virtual application was used for both learning and evaluation.
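For readers unfamiliar with the statistic reported for term 4, the following is a minimal sketch of how a paired mean difference with a 95% confidence interval can be computed. The scores shown are hypothetical placeholders on the 0–6 rubric, not data from the study.

```python
# Illustrative sketch only: hypothetical paired scores on the 0-6 rubric,
# mimicking the term 4 set-up where each student took both exam types.
import numpy as np
from scipy import stats

vps_scores = np.array([4.5, 5.0, 3.5, 4.0, 5.5, 4.5, 3.0, 5.0])    # hypothetical
paper_scores = np.array([4.0, 4.5, 3.0, 3.5, 4.5, 4.0, 2.5, 4.5])  # hypothetical

# Per-student difference and its mean
diff = vps_scores - paper_scores
mean_diff = diff.mean()

# 95% CI from the t-distribution with n-1 degrees of freedom
sem = stats.sem(diff)
t_crit = stats.t.ppf(0.975, df=len(diff) - 1)
ci_low, ci_high = mean_diff - t_crit * sem, mean_diff + t_crit * sem

# Paired t-test for the p-value
t_stat, p_value = stats.ttest_rel(vps_scores, paper_scores)

print(f"Paired mean difference: {mean_diff:.2f} "
      f"(95% CI {ci_low:.2f}, {ci_high:.2f}); p = {p_value:.3f}")
```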