Abstract

While self-assessment continues to be touted as paramount for continuing professional competence, problem-based learning curricula, and adult learning theory, techniques for ensuring valid self-judgments have proven elusive. This study tested the applicability of an innovative relative-ranking procedure to problem-based learning tutorials. A total of 36 students in the McMaster University Faculty of Health Sciences' MD program were given relative-ranking forms listing seven domains of competence along with their definitions. Each student, two of the student's peers, and the student's tutor were asked to complete the ranking exercise after the second, fourth, and sixth tutorials. Combining each level of the time and rater variables generated 66 correlation coefficients, none of which was significantly different from zero. Repeating the analysis on only the extreme domains did not improve this result. The relative-ranking instrument did not prove to be a reliable measure of tutorial performance: ratings were inconsistent both from one week to the next and across raters within a week.
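The abstract does not state which correlation statistic was used or spell out how the 66 coefficients arise; a plausible reading is that 4 raters (the student, two peers, and the tutor) across 3 occasions give 12 rankings per student, and all C(12, 2) = 66 pairs of rankings were correlated. The sketch below is a minimal illustration of that reading, assuming Spearman rank correlation (a natural choice for ranked data, though not confirmed by the abstract) and using synthetic rankings in place of the study's data.

```python
import random
from itertools import combinations
from scipy.stats import spearmanr

DOMAINS = 7    # domains of competence ranked on each form
RATERS = 4     # the student, two peers, and the tutor
OCCASIONS = 3  # after the second, fourth, and sixth tutorials

# Synthetic stand-in data: one ranking (a permutation of 1..7)
# of the seven domains per rater per occasion.
rng = random.Random(0)
rankings = {
    (rater, occasion): rng.sample(range(1, DOMAINS + 1), DOMAINS)
    for rater in range(RATERS)
    for occasion in range(OCCASIONS)
}

# Every pair of the 12 rater-occasion rankings yields one coefficient:
# C(12, 2) = 66, matching the count reported in the abstract.
pairs = list(combinations(rankings, 2))
assert len(pairs) == 66

for a, b in pairs:
    rho, p = spearmanr(rankings[a], rankings[b])
    print(a, b, f"rho={rho:+.2f}", f"p={p:.3f}")
```

Under this reading, the study's null finding corresponds to none of the 66 coefficients differing significantly from zero, whether the pair spans two raters within a week or the same rater across weeks.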

