Context and setting
The use of portfolios as a learning and assessment tool has gained considerable popularity in the past decade. We adopted portfolios as part of our assessment package in the clinical clerkships (years 4–6) of our extensively revised MBChB programme in 2002.

Why the idea was necessary
A major limitation of portfolios is the resource-intensive nature of the assessment process. Published data report examination times exceeding 90 minutes per candidate. In resource-constrained environments, typical of developing countries, this time requirement is prohibitive.

What was done
During the 14-week fourth-year medicine clerkship, students wrote up 25 patient encounters and reflected on these by: (a) editing (in another colour of ink) their clerking notes after discussion with senior clinicians and review of standard reference texts; and (b) formulating written question-and-answer tasks on some aspect of individual patient encounters. The latter were dictated largely by individual student learning needs, with guidance from clinician tutors where necessary. At the end of the year, portfolios were examined by interview. Four patient encounters were discussed during a 30-minute, single-examiner interview; candidates selected 1 case and examiners the remaining 3. Five structured questions explored candidates' ability to synthesise clinical assessments from the information gathered during patient encounters. The questions determined whether candidates had: (a) defined clearly the patient's presenting problem; (b) offered a pathophysiologically plausible diagnosis; (c) substantiated the diagnosis using available clinical and investigatory findings; (d) considered a reasonable differential diagnosis; and (e) formulated a reasonable management plan. Responses to questions were rated on a 9-point global rating scale: poor (1–3), adequate (4–6) and good (7–9). Clinician examiners then assigned a final percentage score to each case using a criterion-referenced scale: 3 or more questions rated 'poor' scored 45% or less; 2 or fewer questions rated 'poor' scored 52–58%; all questions rated 'adequate' scored 60–62%; 2 or fewer questions rated 'good' scored 65–68%; at least 3 questions rated 'good' scored 70–74%; and all questions rated 'good' scored 75% or more. Clinician examiners were trained before the assessment tool was implemented. Pearson's correlation coefficient, Cronbach's alpha coefficient and the standard error of measurement (SEM) of the assessment tool were determined using STATISTICA (StatSoft).

Evaluation of results and impact
The results of the fourth-year class of 2004 (n = 181) were evaluated. The mean (± SD, 95% CI) portfolio interview score was 67.5% (± 10.5, 66.0–69.1). The correlation coefficients of the portfolio interview with the multiple-choice written examination and the clinical case-based examination were r = 0.42 and r = 0.37, respectively. Cronbach's alpha coefficient for the portfolio interview was 0.88, with an inter-item correlation of 0.66. The SEM was 3.6. The single-examiner portfolio interview, using standardised questions and a global rating scale, demonstrates high internal consistency and broadens the spectrum of assessment currently achieved. Because this method requires less examination time per candidate than reported in the published literature, it has been added to the array of assessment tools used in the clinical clerkship rotations of our new programme.
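For readers who wish to check the banding logic, the sketch below restates the criterion-referenced scale as a small Python function and reproduces the reported SEM from the classical test theory relation SEM = SD × √(1 − reliability). It is an illustration only, not the authors' code: the band boundaries are quoted from the abstract, the reading of "2 or fewer questions rated 'poor'" as "at least one but no more than two" is an assumption, and the abstract does not state how the SEM was actually computed in STATISTICA.

```python
# Minimal sketch of the criterion-referenced scoring scale described
# above. Assumes each of the 5 structured questions receives one
# rating on the 9-point global scale: poor (1-3), adequate (4-6),
# good (7-9). Not the authors' implementation.
import math

def rating_category(score: int) -> str:
    """Map a 9-point global rating to its category."""
    if not 1 <= score <= 9:
        raise ValueError("global rating must be 1-9")
    return "poor" if score <= 3 else "adequate" if score <= 6 else "good"

def percentage_band(ratings: list[int]) -> tuple[int, int]:
    """Return the (low, high) percentage band for one case, given the
    five per-question global ratings; the examiner assigns the final
    score within this band."""
    if len(ratings) != 5:
        raise ValueError("exactly 5 structured questions per case")
    cats = [rating_category(r) for r in ratings]
    poor, good = cats.count("poor"), cats.count("good")
    if poor >= 3:
        return (0, 45)    # 3 or more questions rated 'poor'
    if poor >= 1:
        return (52, 58)   # 1-2 questions 'poor' (assumed reading)
    if good == 0:
        return (60, 62)   # all questions 'adequate'
    if good <= 2:
        return (65, 68)   # 1-2 questions 'good', rest 'adequate'
    if good <= 4:
        return (70, 74)   # at least 3 (but not all) questions 'good'
    return (75, 100)      # all questions 'good'

# Example: two 'good' ratings, three 'adequate' -> the 65-68% band.
print(percentage_band([7, 7, 5, 5, 4]))  # (65, 68)

# Sanity check on the reported SEM via the classical test theory
# formula SEM = SD * sqrt(1 - reliability), using the abstract's
# SD = 10.5 and Cronbach's alpha = 0.88:
sem = 10.5 * math.sqrt(1 - 0.88)
print(f"SEM = {sem:.1f}")  # 3.6, matching the reported value
```

With SD = 10.5 and alpha = 0.88, the formula gives 10.5 × √0.12 ≈ 3.64, consistent with the reported SEM of 3.6.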