Abstract

As a result of the COVID-19 pandemic, medical schools across the world have been faced with the unprecedented situation of closing their doors and cancelling clinical teaching and examinations. The traditional model of assessment, relying on high-stakes clinical examination(s), is no longer possible, nor fully fit for purpose. How can medical schools still demonstrate that their students have reached the threshold for safe practice and are able to graduate? A potential solution to this issue is programmatic assessment. Programmatic assessment involves the longitudinal collection of information about a student based on multiple low-stakes assessments.1 The purpose of each individual assessment is to provide feedback, not to pass or fail a learner. By assembling feedback from many assessments, the learner monitors their progress and discusses this with a mentor. High-stakes decisions are based on all of the data and are made by an independent competence committee. Programmatic assessment moves away from an end-of-course ‘big bang’ examination to a continuous approach of assessment that drives learning in a meaningful and self-directed way.

During the COVID-19 pandemic, medical schools have used a variety of workplace-based assessments of clinical and professional skills, as well as applied knowledge tests, to collect information on learning and performance. We propose that programmatic assessment could be used in a hybrid model to complement more conventional methods of assessment to make high-stakes decisions. Programmatic assessment has been gaining traction for many years, but its implementation is often hindered by challenges around nationwide ranking, university regulations, reliability and a resistance to change from the traditional model of high-stakes, summative examinations. With the cancellation of clinical examinations and restricted student access to clinical sites, medical schools have been forced to consider alternative ways to evidence their students’ progression.

There is a strong evidence base for the benefits of programmatic assessment, including studies demonstrating its ability to facilitate learning and maximise the robustness of high-stakes decisions,2 as well as to identify students at risk of poor academic progress and thereby optimise timely interventions.3 The sole use of high-stakes, summative examinations has been described as intrinsically flawed, because the ‘ideal’ assessment, capable of assessing all of the necessary competencies, does not exist, and placing an emphasis on single high-stakes examinations may promote poor learning styles.2 Although concerns have been raised regarding the reliability of ‘subjective’ single assessment points, it has been shown that acceptable reliability is achievable through large numbers of assessment points, varying methods of assessment (including both standardised and non-standardised assessment) and multiple assessors.4 Although this is a challenging time for medical education, the COVID-19 crisis may in fact present an opportunity for reflection and adaptation. Here are a few considerations regarding programmatic assessment during the COVID-19 pandemic.
Having assessment data over time may help to ease some of our angst about learners’ assessment decisions during such unprecedented times. With the dissemination of this message, medical schools may overcome their apprehension regarding programmatic assessment and recognise its many benefits. In the face of adversity, we have stumbled upon a unique opportunity to enrich students’ learning, and in the words of Winston Churchill, we should ‘never let a good crisis go to waste’.
