Abstract

Background
Bristol Medical School has adopted a near-peer-led teaching approach to deliver Basic Life Support training to first-year undergraduate medical students. In sessions delivered to large cohorts, it proved challenging to identify early in the course which candidates were struggling with their learning. We developed and piloted a novel online performance scoring system to better track and highlight candidate progress.

Methods
During this pilot, a 10-point scale was used to evaluate candidate performance at six time-points during training. The scores were collated in an anonymised, secure spreadsheet, which was conditionally formatted to provide a visual representation of each score. A one-way ANOVA was performed on the scores, and trends were analysed during each course to review candidate trajectories. Descriptive statistics were assessed. Values are presented as mean scores with standard deviation (x̄ ± SD).

Results
A significant linear trend was demonstrated (P < 0.001) for the progression of candidates over the course. The average session score increased from 4.61 ± 1.78 at the start to 7.92 ± 1.22 at the end of the final session. A threshold of 1 SD below the mean was set, and candidates scoring below this threshold at any of the six time-points were flagged as struggling. This threshold enabled efficient highlighting of struggling candidates in real time.

Conclusions
Although the system will be subject to further validation, our pilot has shown that a simple 10-point scoring system, combined with a visual representation of performance, helps to identify struggling candidates earlier across large cohorts of students undertaking skills training such as Basic Life Support. This early identification enables effective and efficient remedial support.
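The abstract describes the flagging rule only at a high level; as a minimal illustrative sketch, the Python snippet below reproduces a mean-minus-1-SD threshold over a candidates × time-points score matrix. The `scores` data and all variable names are hypothetical; the actual pilot implemented this logic via conditional formatting in a spreadsheet rather than in code.

```python
import numpy as np

# Hypothetical scores: rows = candidates, columns = the six
# scoring time-points, each on the 10-point scale.
scores = np.array([
    [4, 5, 6, 7, 8, 8],
    [3, 3, 4, 4, 5, 6],
    [5, 6, 7, 8, 8, 9],
    [2, 3, 3, 4, 4, 5],
])

# Cohort mean and sample SD at each time-point.
means = scores.mean(axis=0)
sds = scores.std(axis=0, ddof=1)

# Threshold: 1 SD below the cohort mean at each time-point.
thresholds = means - sds

# Flag any candidate scoring below the threshold at any time-point,
# mirroring the real-time highlighting in the conditionally
# formatted spreadsheet.
flags = scores < thresholds
for candidate, flagged in enumerate(flags):
    if flagged.any():
        timepoints = np.flatnonzero(flagged) + 1
        print(f"Candidate {candidate}: below threshold at "
              f"time-point(s) {timepoints.tolist()}")
```

Computing the threshold per time-point (rather than once over the whole course) matches the stated aim of identifying struggling candidates at any of the six given time-points, so early sessions with lower cohort averages do not mask candidates who are falling behind their peers.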
