Abstract

The traditional approach to estimating the consistency of school effects across subject areas and the stability of school effects across time is to fit separate value-added multilevel models to each subject or cohort and to correlate the resulting empirical Bayes predictions. We show that this gives biased correlations and that these biases cannot be avoided by simply correlating “unshrunken” or “reflated” versions of these predicted random effects. In contrast, we show that fitting a joint value-added multilevel multivariate response model simultaneously to all subjects or cohorts directly gives unbiased estimates of the correlations of interest. There is no need to correlate the resulting empirical Bayes predictions, and indeed we show that this should again be avoided as the resulting correlations are also biased. We illustrate our arguments with separate applications to measuring the consistency and stability of school effects in primary and secondary school settings. However, our arguments apply more generally to other areas of application where researchers routinely interpret correlations between predicted random effects rather than estimating and interpreting these correlations directly.
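
To see where the bias comes from, consider a minimal illustrative sketch assuming a balanced design with $n$ students per school: separate subject-specific random-intercept models give school effects $u_{1j}$ and $u_{2j}$ with variances $\sigma_{u1}^2$ and $\sigma_{u2}^2$ and consistency correlation $\rho$, and student-level residuals with variances $\sigma_{e1}^2$ and $\sigma_{e2}^2$ and within-student cross-subject correlation $\rho_e$. The empirical Bayes prediction for subject $s$ in school $j$ is then approximately $\hat{u}_{sj} = \lambda_s(u_{sj} + \bar{e}_{sj})$, where $\bar{e}_{sj}$ is the school-mean residual and $\lambda_s = \sigma_{us}^2/(\sigma_{us}^2 + \sigma_{es}^2/n)$ is the shrinkage factor, so that

$$\operatorname{corr}(\hat{u}_{1j}, \hat{u}_{2j}) = \frac{\rho\,\sigma_{u1}\sigma_{u2} + \rho_e\,\sigma_{e1}\sigma_{e2}/n}{\sqrt{\big(\sigma_{u1}^2 + \sigma_{e1}^2/n\big)\big(\sigma_{u2}^2 + \sigma_{e2}^2/n\big)}} \neq \rho \quad \text{in general}.$$

The shrinkage factors cancel from this correlation in the balanced case, which is why reflating the predictions cannot remove the bias: the school-mean residuals contaminate the predictions, and unbalanced school sizes distort the correlation further by making the shrinkage vary across schools.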

Highlights

  • There are many studies that investigate the effects of individual schools on student achievement using multilevel value-added analyses

  • While we focus on the most standard data designs for estimating the consistency and stability of school effects, we note that other designs are possible where, for example, students within each school study only one of the subjects and contribute only one achievement score each, or where a single cohort of students is tracked across multiple value-added periods and interest lies in correlating the school effects across these value-added periods

  • We have argued that the preferred approach to estimating the consistency and stability of school effects is to fit a joint model to the multiple subjects or cohorts under investigation and to estimate the consistency or stability correlations directly as a function of the model parameters, as sketched below
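
A minimal sketch of this joint approach (not the authors' code) is given below in Python using statsmodels. It assumes long-format data with one row per student-by-subject score and hypothetical column names school, student, subject, prior and score; it also simplifies the student-level structure to a single variance component, whereas a full multivariate response model would additionally allow subject-specific residual variances. The point is that the consistency correlation is read directly from the estimated school-level covariance matrix rather than computed from empirical Bayes predictions.

# Illustrative sketch only: fit subject-specific school effects jointly and
# read the consistency correlation off the school-level covariance matrix.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per student-by-subject score
df = pd.read_csv("achievement_long.csv")

model = smf.mixedlm(
    "score ~ 0 + C(subject) + C(subject):prior",  # subject-specific intercepts and prior-attainment slopes
    data=df,
    groups="school",                              # schools are the level-2 units
    re_formula="0 + C(subject)",                  # one school effect per subject, unstructured covariance
    vc_formula={"student": "0 + C(student)"},     # student variance component links a student's scores
)
fit = model.fit()

# School-level covariance matrix of the subject-specific random effects
cov_u = fit.cov_re.values
consistency = cov_u[0, 1] / np.sqrt(cov_u[0, 0] * cov_u[1, 1])
print(f"Estimated consistency correlation: {consistency:.3f}")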

Introduction

There are many studies that investigate the effects of individual schools on student achievement using multilevel value-added analyses (see the handbooks by Teddlie & Reynolds, 2000, and Townsend, 2007, the recent review by Reynolds et al., 2014, and the 2004 special issue of the Journal of Educational and Behavioral Statistics devoted to value-added models, Wainer, 2004). At their simplest, these analyses use two-level students-within-schools random-intercept models to regress student achievement at the end of the value-added period of interest on student achievement at the start of the period (Goldstein, 1997). We refer the reader to Guarino, Reckase, and Wooldridge (2015) for a recent discussion of these and other estimators.
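
In generic notation, this baseline model for student $i$ in school $j$ can be written as

$$y_{ij} = \beta_0 + \beta_1 x_{ij} + u_j + e_{ij}, \qquad u_j \sim N(0, \sigma_u^2), \qquad e_{ij} \sim N(0, \sigma_e^2),$$

where $y_{ij}$ denotes achievement at the end of the value-added period, $x_{ij}$ achievement at the start of the period, $u_j$ the value-added school effect, and $e_{ij}$ the student-level residual.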
