Abstract

Rater-mediated assessments, such as teacher behavior rating scales, measure student behavior indirectly through the lens of a rater. As a result, scores from rater-mediated assessments can be influenced by rater effects, that is, individual differences in rater perspectives, attitudes, beliefs, and interpretations of rating scale items. Rater effects are a fundamental aspect of all rater-mediated assessments. However, traditional approaches to evaluating rater effects (i.e., classical test theory, generalizability theory, and multilevel modeling) merely estimate how much score variability is attributable to the rater. These approaches, while informative, do not offer a solution to the problem. In contrast, Many-facet Rasch measurement (MFRM) approaches estimate and control for rater effects in rater-mediated assessments so that scores are adjusted to account for rater variability. Thus, MFRM offers unique insights into individual- and group-level rater effects that can be used to inform a solution. The purpose of this paper, accordingly, is to introduce MFRM, discuss its advantages for evaluating rater effects in rater-mediated assessments, and demonstrate its use through an applied example.
