Abstract

While portfolios are increasingly used to assess competence, the validity of such portfolio-based assessments has hitherto remained unconfirmed. The purpose of the present research is therefore to further our understanding of how assessors form judgments when interpreting the complex data included in a competency-based portfolio. Eighteen assessors appraised one of three competency-based mock portfolios while thinking aloud, before taking part in semi-structured interviews. A thematic analysis of the think-aloud protocols and interviews revealed that assessors reached judgments through a three-phase cyclical cognitive process of acquiring, organizing, and integrating evidence. Upon conclusion of the first cycle, assessors reviewed the remaining portfolio evidence to look for confirming or disconfirming evidence. Assessors were inclined to stick to their initial judgments even when confronted with seemingly disconfirming evidence. Although assessors reached similar final (pass–fail) judgments of students’ professional competence, they differed in their information-processing approaches and the reasoning behind their judgments. Differences sprang from assessors’ divergent assessment beliefs, performance theories, and inferences about the student. Assessment beliefs refer to assessors’ opinions about what kind of evidence gives the most valuable and trustworthy information about the student’s competence, whereas assessors’ performance theories concern their conceptualizations of what constitutes professional competence and competent performance. Even when using the same pieces of information, assessors furthermore differed with respect to inferences about the student as a person as well as a (future) professional. Our findings support the notion that assessors’ reasoning in judgment and decision-making varies and is guided by their mental models of performance assessment, potentially impacting feedback and the credibility of decisions. Our findings also lend further credence to the assertion that portfolios should be judged by multiple assessors who should, moreover, thoroughly substantiate their judgments. Finally, it is suggested that portfolios be designed in such a way that they facilitate the selection of and navigation through the portfolio evidence.

Highlights

  • With the rise of competency-based assessment, portfolios are increasingly seen as the linchpin of assessment systems

  • Although prior research has addressed the question of how assessors develop judgments, it has focused on judgments based on direct observations (Kogan et al. 2011) or single assessments, forgoing the opportunity to investigate how holistic judgments are formed on the basis of the complex data collected in the student’s portfolio

  • The present study described the process whereby assessors reach judgments when reviewing the evidence collated in a competency-based portfolio

Introduction

With the rise of competency-based assessment, portfolios are increasingly seen as the linchpin of assessment systems. Multiple medical schools have implemented competency-based assessment systems in which the portfolio is key to the assessment of students’ achievements (Dannefer and Henson 2007; Davis et al. 2001; Driessen 2016; Smith et al. 2003). In these portfolio-based assessment systems, decisions regarding the students’ level of competence typically rely on expert judgment. Although prior research has addressed the question of how assessors develop judgments, it has focused on judgments based on direct observations (Kogan et al. 2011) or single assessments, forgoing the opportunity to investigate how holistic judgments are formed on the basis of the complex data collected in the student’s portfolio.
