Abstract

Virtual patient (VP) cases are used for healthcare education and assessment. Most VP systems track user interactions, and these data can be used for assessment, yet few studies have investigated how virtual exam cases should be scored and graded. We applied eight different scoring models to a data set from 154 students. Issues studied included the impact of penalizing guessing, requiring a correct diagnosis, different grading levels, and the effect of using weighted diagnosis metrics. Controlling for random guessing is necessary and can be accomplished with a rubric that measures the relative efficiency of the learner's inquiries against the total number of inquiries. Using a straight percentage score versus a curved exam score had a major impact on grades. Significant differences were found between metrics: only one of the eight rubric models resulted in a Gaussian distribution of scores. Course directors need to analyze the expected learning outcomes of a course to determine a scoring metric that assesses those particular needs; the grading rubric must also control for guessing.
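The abstract describes controlling for guessing with a rubric that relates the learner's relevant inquiries to the total number of inquiries made. Below is a minimal sketch of one such efficiency-based rubric; the function name, the diagnosis cap, and the scoring constants are illustrative assumptions, not the actual models evaluated in the study.

# Hypothetical sketch of an efficiency-based scoring rubric that
# controls for guessing; constants are illustrative assumptions,
# not the paper's actual models.

def efficiency_score(relevant_inquiries: int,
                     total_inquiries: int,
                     diagnosis_correct: bool,
                     max_score: float = 100.0) -> float:
    """Score a virtual patient exam case.

    Efficiency is the fraction of the learner's inquiries that were
    relevant to the case: a learner who fires off many irrelevant
    inquiries (random guessing) is penalized even if all the relevant
    ones are eventually covered.
    """
    if total_inquiries == 0:
        return 0.0
    efficiency = relevant_inquiries / total_inquiries  # in [0, 1]
    score = max_score * efficiency
    # Requiring a correct diagnosis is one of the issues studied in
    # the paper; here an incorrect diagnosis caps the score at half
    # credit (an assumed, illustrative policy).
    if not diagnosis_correct:
        score = min(score, 0.5 * max_score)
    return score

# Example: 12 of 30 inquiries were relevant, diagnosis correct.
print(efficiency_score(12, 30, diagnosis_correct=True))  # 40.0

Under a rubric of this shape, the raw efficiency score could then be reported straight as a percentage or mapped onto a curve, the choice the abstract reports as having a major impact on grades.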
