Abstract

The aim of this study was to explore whether competency-based progress tests for postgraduate psychiatry are reliable, whether they can discriminate between trainees at different levels of training, and whether they can demonstrate improvement in trainees' skills from 3 years of data. Psychiatry trainees in the North Western Deanery, UK, were invited to participate in the annual progress test. The progress test simulated the Clinical Assessment of Skills and Competencies (CASC) exam, the final postgraduate examination for psychiatry trainees. The sum of global scores from all stations for each candidate was used for statistical analysis. Cronbach's alpha was used to calculate interstation reliability. Analysis of variance (ANOVA) was used to explore whether the progress test could discriminate between the three levels of trainees each year. Student's t test was used to explore whether skills improved and developed as a cohort progressed; ANOVA was used for the cohort with 3 years of data. The progress test is more likely to be reliable (alpha ≥ 0.8) when 12 stations are used. ANOVA revealed significantly improved scores with increasing seniority, with the mean total score increasing from 23.1 to 31.3 in 2012 (p = 0.008) and from 36.9 to 46.6 in 2013 (p = 0.004). In 2014, this effect was not observed, with the mean decreasing from 42.4 to 41.3. Over time, two cohorts demonstrated improving mean scores on Student's t tests, from 26.4 to 32.4 (p = 0.01) and from 26.9 to 42.6 (p = 0.01). The third cohort did not demonstrate a difference over time, with mean scores of 23.1, 27.6, and 25.9 over the 3 years. It remains inconclusive whether the progress test can accurately distinguish between levels of trainee seniority or assess progress over time; possible explanations for the non-significant results and further avenues of enquiry are discussed.
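The interstation reliability statistic used above, Cronbach's alpha, can be sketched as follows. This is a minimal illustration of the standard formula, alpha = k/(k-1) × (1 − Σ station variances / variance of total scores); the function name and the score matrix are hypothetical and not taken from the study's data.

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a candidates-by-stations matrix of global scores."""
    k = len(scores[0])  # number of stations
    # Sample variance of each station's scores across candidates
    station_vars = [variance(col) for col in zip(*scores)]
    # Sample variance of each candidate's total score
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(station_vars) / total_var)

# Illustrative data: 4 candidates x 3 stations (hypothetical global scores)
scores = [
    [3, 4, 3],
    [2, 3, 2],
    [4, 5, 4],
    [3, 3, 3],
]
print(round(cronbach_alpha(scores), 2))  # -> 0.96
```

With 12 stations, as the study recommends, `k` would be 12 and each row would hold the twelve global station scores for one candidate.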
