Abstract

Background
Medical faculty’s teaching performance is often measured using residents’ feedback, collected by questionnaires. Researchers have extensively studied the psychometric qualities of the resulting ratings. However, these studies rarely consider the number of response categories and its consequences for residents’ ratings of faculty’s teaching performance. We compared the variability of residents’ ratings measured by five- and seven-point response scales.

Methods
This retrospective study used teaching performance data from Dutch anaesthesiology residency training programs. The ratings were collected with questionnaires from the extensively studied System for Evaluation of Teaching Qualities (SETQ), using five- and seven-point response scales. We inspected the variability of the ratings by comparing standard deviations, interquartile ranges, and frequency (percentage) distributions. Relevant statistical tests were used to test differences in frequency distributions and teaching performance scores.

Results
We examined 3379 residents’ ratings and 480 aggregated faculty scores. Residents used the additional response categories provided by the seven-point scale, especially those differentiating between positive performances. Residents’ ratings and aggregated faculty scores were more evenly distributed on the seven-point scale than on the five-point scale. The seven-point scale also showed a smaller ceiling effect. After rescaling, the mean scores and (most) standard deviations of ratings from both scales were comparable.

Conclusions
Ratings from the seven-point scale were more evenly distributed and could potentially yield more nuanced, specific and user-friendly feedback. Still, both scales measured (almost) similar teaching performance outcomes. In teaching performance practice, residents and faculty members should discuss whether response scales fit their preferences and goals.
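
The rescaling and variability comparisons described above can be illustrated with a short sketch. This is not the authors’ analysis code: the data are synthetic, the column names are invented, and a simple linear mapping of the five-point range onto the seven-point range is assumed.

```python
import numpy as np
import pandas as pd

# Hypothetical ratings; the real SETQ item-level data are not public.
rng = np.random.default_rng(0)
five = pd.Series(rng.integers(1, 6, size=500), name="five_point")    # categories 1..5
seven = pd.Series(rng.integers(1, 8, size=500), name="seven_point")  # categories 1..7

def rescale(x, old_min, old_max, new_min, new_max):
    """Linearly map a rating from one scale range onto another."""
    return new_min + (x - old_min) * (new_max - new_min) / (old_max - old_min)

# Put the five-point ratings on the 1-7 range so means and SDs are comparable.
five_rescaled = rescale(five, 1, 5, 1, 7)

def describe(x):
    return {
        "mean": x.mean(),
        "sd": x.std(ddof=1),
        "iqr": x.quantile(0.75) - x.quantile(0.25),
    }

print("five-point (rescaled to 1-7):", describe(five_rescaled))
print("seven-point:                 ", describe(seven))

# Frequency (percentage) distribution per response category,
# e.g. to check for a ceiling effect in the top category.
print(seven.value_counts(normalize=True).sort_index() * 100)
```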

Highlights

  • Medical faculty’s teaching performance is often measured using residents’ feedback, collected by questionnaires

  • We compared the variability of residents’ ratings of faculty’s teaching performance using five- and seven-point response scales. We examined whether both scales resulted in similar teaching performance outcomes

  • Analysis: residents’ ratings containing more than 50% missing values were excluded from our dataset; remaining missing values were imputed using expectation maximization (EM) (see the sketch after this list)
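
A minimal sketch of this preprocessing step, assuming Python with pandas/NumPy and synthetic item-level ratings (the actual SETQ items and the exact EM routine used are not given in this excerpt). The loop below is a simplified EM-style scheme: it alternates between re-estimating the mean and covariance and replacing missing entries with their conditional expectations, and omits the conditional-covariance correction term of full EM.

```python
import numpy as np
import pandas as pd

# Hypothetical item-level ratings with missing values; real SETQ data are not public.
rng = np.random.default_rng(1)
X = rng.normal(5, 1, size=(200, 6))
X[rng.random(X.shape) < 0.1] = np.nan
df = pd.DataFrame(X, columns=[f"item_{i}" for i in range(1, 7)])

# Step 1: exclude ratings (rows) with more than 50% of items missing.
min_answered = int(np.ceil(df.shape[1] / 2))
df = df.dropna(thresh=min_answered)

# Step 2: simplified EM-style imputation under a multivariate-normal model.
def em_impute(data, n_iter=50):
    X = data.to_numpy(dtype=float).copy()
    missing = np.isnan(X)
    # initialise missing cells with column means
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.nonzero(missing)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        for i in range(X.shape[0]):
            m = missing[i]
            if not m.any():
                continue
            o = ~m
            # conditional expectation of missing items given observed items
            cov_mo = cov[np.ix_(m, o)]
            cov_oo = cov[np.ix_(o, o)]
            X[i, m] = mu[m] + cov_mo @ np.linalg.solve(cov_oo, X[i, o] - mu[o])
    return pd.DataFrame(X, columns=data.columns, index=data.index)

imputed = em_impute(df)
print(imputed.isna().sum().sum())  # 0: no missing values remain
```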


Introduction

Medical faculty’s teaching performance is often measured using residents’ feedback, collected by questionnaires. Researchers have extensively studied the psychometric qualities of the resulting ratings, but these studies rarely consider the number of response categories and its consequences for residents’ ratings of faculty’s teaching performance. To gain insight into the strengths and weaknesses of faculty’s teaching performance, feedback from residents is collected using questionnaires [1, 2, 4, 5]. It is crucial that questionnaires measuring faculty’s teaching performance are valid, reliable, and fit their practical use. The number of response categories might affect residents’ ratings of faculty’s performance (e.g. means, frequency distributions) [12,13,14,15,16,17].

