Abstract

The aim of this study was to develop a reliable instrument for objectively assessing feedback quality, to use it to assess the quality of students' narrative feedback, and to offer it to students as a self-assessment tool in their learning process. In a retrospective cohort study, 635 feedback narratives, provided by small groups of Medicine and Biomedical Sciences undergraduate students, were extracted from available quarterly curriculum evaluation surveys. A rubric was developed based on the literature and the contents of our feedback education. It consists of seven subitems and has a maximum score of 20 points (sufficient score: >10 points). Rubric reliability was evaluated using the intra-class correlation. The rubric was tested by analysing the feedback narratives. To test progression, rubric scores were compared between study years with a Kruskal-Wallis analysis and Dunn's post-hoc testing with Bonferroni correction. The rubric has an intra-class correlation of 0.894. First-year students had a mean rubric score of 11.5 points (SD 3.6), second-year students 12.4 (SD 3.4) and third-year students 13.1 (SD 3.6). Kruskal-Wallis testing showed significant differences in feedback quality between study years (χ2(2, N=635) = 17.53, p<0.001). Dunn's post-hoc test revealed significant differences between study years one and two (p=0.012) and one and three (p<0.001). The developed rubric is a reliable instrument for assessing narrative feedback quality. Students were able to provide feedback of sufficient quality, and quality improved across study years. The instrument will allow students to assess themselves and learn where there is still room for improvement.
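The statistical procedure described above (a Kruskal-Wallis omnibus test followed by pairwise Dunn's post-hoc tests with Bonferroni correction) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the toy rubric scores, group names, and the `dunn_posthoc` helper are assumptions, and the Dunn implementation omits the tie correction for brevity.

```python
# Sketch of the analysis pipeline: Kruskal-Wallis across study years,
# then pairwise Dunn's tests with Bonferroni-adjusted p-values.
# All data below are illustrative, NOT the study's data.
from itertools import combinations

import numpy as np
from scipy import stats


def dunn_posthoc(groups: dict[str, np.ndarray]) -> dict[tuple[str, str], float]:
    """Pairwise Dunn's tests on jointly ranked data (no tie correction),
    returning Bonferroni-adjusted two-sided p-values."""
    names = list(groups)
    pooled = np.concatenate([groups[n] for n in names])
    ranks = stats.rankdata(pooled)  # average ranks for ties
    n_total = len(pooled)
    # Mean rank per group, slicing the joint ranking back apart.
    mean_rank, sizes, start = {}, {}, 0
    for n in names:
        k = len(groups[n])
        mean_rank[n] = ranks[start:start + k].mean()
        sizes[n] = k
        start += k
    var = n_total * (n_total + 1) / 12.0
    n_pairs = len(names) * (len(names) - 1) // 2
    p_adj = {}
    for a, b in combinations(names, 2):
        se = np.sqrt(var * (1 / sizes[a] + 1 / sizes[b]))
        z = (mean_rank[a] - mean_rank[b]) / se
        p_adj[(a, b)] = min(1.0, 2 * stats.norm.sf(abs(z)) * n_pairs)
    return p_adj


scores = {  # hypothetical rubric scores (0-20) for three cohorts
    "year1": np.array([8.0, 10.0, 11.0, 12.0, 13.0]),
    "year2": np.array([8.0, 10.0, 11.0, 12.0, 13.0]),
    "year3": np.array([14.0, 15.0, 16.0, 17.0, 18.0]),
}
h_stat, p_omnibus = stats.kruskal(*scores.values())
pairwise = dunn_posthoc(scores)
```

In this workflow the omnibus test only signals that at least one cohort differs; the Bonferroni-adjusted pairwise p-values then identify which year-to-year differences drive the effect, mirroring the comparisons reported in the abstract.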
