Abstract

Problem-based learning (PBL) is widely used in medical education. In some cases, facilitators assign a grade to reflect a student's performance in small-group sessions. In our PBL track, facilitators were asked to assess students' knowledge base independently of their group participatory skills. To determine whether facilitators' grades correlated with student performance on written exams, we undertook a retrospective study of data from our PBL track. Data from 156 students and 107 facilitators over six years of a PBL track at Penn State College of Medicine were analyzed by Pearson correlation after pairing facilitator grades with written exam grades for each of the eight blocks of the curriculum. Exam reliability and validity were assessed by Cronbach's alpha and by correlation with USMLE I board scores. The mean alpha was 0.549 ± 0.221, and the mean correlation with USMLE scores was 0.558 ± 0.151. Facilitators' scores for knowledge were positively associated with students' exam grades, with significant Pearson correlation coefficients ranging from 0.342 to 0.622. However, the corresponding coefficients of determination showed that the knowledge scores explained only 12% to 39% of the variance in exam scores. Overestimation by facilitators was significantly (p < 0.0001) greater for students in the bottom 25% of the class by exam score than for students in the top 25%. On the basis of this study, we concluded that facilitator assessment of student knowledge base is not a useful predictor of performance on written exams.
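The core of the analysis described above pairs facilitator knowledge grades with written exam grades and computes the Pearson correlation coefficient r, then squares it to obtain the coefficient of determination. A minimal sketch of that calculation, using entirely hypothetical scores (not the study's data), might look like this:

```python
# Sketch of the analysis on hypothetical data: Pearson correlation between
# paired facilitator knowledge grades and written exam grades, plus the
# coefficient of determination (r squared), which gives the fraction of
# exam-score variance the knowledge grades account for.
import numpy as np

def correlation_and_r2(facilitator_grades, exam_grades):
    """Return (r, r_squared) for two paired lists of scores."""
    r = float(np.corrcoef(facilitator_grades, exam_grades)[0, 1])
    return r, r * r

# Hypothetical paired scores for one curriculum block (illustrative only).
facilitator = [85, 78, 92, 70, 88, 75, 80, 95]
exam        = [80, 72, 90, 74, 85, 70, 78, 91]

r, r2 = correlation_and_r2(facilitator, exam)
print(f"r = {r:.3f}, r^2 = {r2:.3f}")
```

Note how squaring shrinks a moderate correlation: an r of about 0.35, the low end of the range reported above, corresponds to an r² of roughly 0.12, i.e. only 12% of the variance explained.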
