Abstract

Background: Programmatic assessment that looks across a whole year may contribute to better decisions than those made from isolated assessments alone. The aim of this study is to describe and evaluate a programmatic system for handling student assessment results that is aligned not only with learning and remediation but also with defensibility. The key components are standards-based assessments, the use of a "Conditional Pass", and regular progress meetings.

Methods: The new assessment system is described. The evaluation is based on years 4-6 of a 6-year medical course. The types of concerns staff had about students were clustered into themes, alongside any interventions and outcomes for the students concerned. The likelihoods of passing the year according to type of problem were compared before and after the phasing in of the new assessment system.

Results: The new system was phased in over four years. In the fourth year of implementation, 701 students had 3539 assessment results, of which 4.1% were Conditional Pass. More in-depth analysis of the 1516 results available from 447 students revealed that the odds ratio (95% confidence interval) for failure was highest for students with problems identified in more than one part of the course (18.8 (7.7-46.2), p < 0.0001) or with problems with professionalism (17.2 (9.1-33.3), p < 0.0001). The odds ratio for failure was lowest for problems with assignments (0.7 (0.1-5.2), NS). Compared with the previous system, more students failed the year under the new system on the basis of performance during the year (20, or 4.5%, compared with four, or 1.1%, under the previous system; p < 0.01).

Conclusions: The new system detects more students in difficulty and has resulted in less "failure to fail". The requirement to state the conditions required to pass has contributed to a paper trail that should improve defensibility. Most importantly, it has helped detect and act on some of the more difficult areas to assess, such as professionalism.
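For readers who want to reproduce this style of analysis, the sketch below shows one standard way to compute an odds ratio with a Wald 95% confidence interval and a Fisher's exact test for a 2x2 table. This is an illustrative Python sketch, not the authors' analysis code, and the counts are approximations back-calculated from the percentages reported above (the denominators are assumptions).

```python
import math
from scipy.stats import fisher_exact

def odds_ratio_wald_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald 95% CI for the 2x2 table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts back-calculated from the reported rates:
# ~20/447 year failures under the new system vs. ~4/364 under the old one.
new_fail, new_pass = 20, 427
old_fail, old_pass = 4, 360

or_, lo, hi = odds_ratio_wald_ci(new_fail, new_pass, old_fail, old_pass)
_, p = fisher_exact([[new_fail, new_pass], [old_fail, old_pass]])
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f}), Fisher's exact p = {p:.4f}")
```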

Highlights

  • Programmatic assessment that looks across a whole year may contribute to better decisions compared with those made from isolated assessments alone

  • We found the use of the term “borderline” created ambiguity for both staff and students, resulting in staff using the term in a range of situations: when the decision was difficult, if there was a paucity of data, or if there was uncertainty about the validity of the assessments

  • A failure to achieve the standards in any of these summative assessments leads to a conditional pass (CP) for that module (see the illustrative sketch after this list)
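As a concrete reading of that rule, here is a minimal illustrative model in Python of how a module's standards-based results might roll up into a Conditional Pass. This is a sketch under assumed names and structure, not the authors' implementation; in the actual system, decisions also involve stated conditions and regular progress meetings.

```python
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    PASS = "Pass"
    CONDITIONAL_PASS = "Conditional Pass"  # standards not yet met; conditions stated

@dataclass
class AssessmentResult:
    standard: str   # e.g. "clinical skills", "professionalism"
    met: bool       # did the student reach the standard?

def module_outcome(results: list[AssessmentResult]) -> tuple[Outcome, list[str]]:
    """Any unmet standard among a module's summative assessments yields a
    Conditional Pass, with the unmet standards recorded as the conditions
    that must be satisfied before the pass is confirmed."""
    conditions = [r.standard for r in results if not r.met]
    outcome = Outcome.CONDITIONAL_PASS if conditions else Outcome.PASS
    return outcome, conditions

# One unmet standard is enough to trigger a Conditional Pass for the module.
results = [AssessmentResult("clinical skills", True),
           AssessmentResult("professionalism", False)]
print(module_outcome(results))  # (Outcome.CONDITIONAL_PASS, ['professionalism'])
```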



Introduction

Programmatic assessment that looks across a whole year may contribute to better decisions compared with those made from isolated assessments alone. Recognition that many assessment tools were unreliable resulted in a quest for, and changes to, more reliable ones. From such moves arose a threat to validity, as the drive for objectivity came at the expense of the richer information needed to inform defensible decisions. Added to this is the complexity of so-called "sub-threshold" concerns, where a candidate may cause some concern on a number of assessments but none, on its own, is sufficient to trigger action [3]. Taken in their entirety, such assessment results suggest a pattern of performance that should be acted on. There is a need for research into ways to improve the quality of assessment systems, not just assessment tools [4].

