Abstract

Value-added (VA) models measure the productivity of agents such as teachers or doctors based on the outcomes they produce. The utility of VA models for performance evaluation depends on the extent to which VA estimates are biased by selection, for instance by differences in the abilities of students assigned to teachers. One widely used approach for evaluating bias in VA is to test for balance in lagged values of the outcome, based on the intuition that today's inputs cannot influence yesterday's outcomes. We use Monte Carlo simulations to show that, unlike in conventional treatment effect analyses, tests for balance using lagged outcomes do not provide robust information about the degree of bias in value-added models for two reasons. First, the treatment itself (value-added) is estimated, rather than exogenously observed. As a result, correlated shocks to outcomes can induce correlations between current VA estimates and lagged outcomes that are sensitive to model specification. Second, in most VA applications, estimation error does not vanish asymptotically because sample sizes per teacher (or principal, manager, etc.) remain small, making balance tests sensitive to the specification of the error structure even in large datasets. We conclude that bias in VA models is better evaluated using techniques that are less sensitive to model specification, such as randomized experiments, rather than using lagged outcomes.
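
A minimal Monte Carlo sketch of the abstract's first point is below. It is not the paper's actual simulation design; the class size, variance components, and serial-correlation parameter rho are illustrative assumptions. Under purely random assignment (so true bias is zero), the lagged-outcome balance test reports large imbalance when VA is estimated without lagged-score controls, and roughly zero imbalance when lagged scores are controlled for, illustrating the test's sensitivity to specification.

```python
import numpy as np

rng = np.random.default_rng(0)
J, n = 1000, 20                       # many teachers, but few students per class
mu = rng.normal(0.0, 0.15, J)         # true teacher value-added
teacher = np.repeat(np.arange(J), n)  # purely random assignment: zero true bias
a = rng.normal(0.0, 1.0, J * n)       # persistent student ability
rho = 0.4                             # serial correlation in student-level shocks
e_lag = rng.normal(0.0, 0.5, J * n)   # last year's idiosyncratic shock
e = rho * e_lag + rng.normal(0.0, 0.5 * np.sqrt(1 - rho**2), J * n)

y_lag = a + e_lag                     # lagged outcome: set before current teacher
y = mu[teacher] + a + e               # current outcome

def balance_coef(va_hat):
    """Balance test: slope from regressing lagged outcomes on estimated VA."""
    x = va_hat[teacher]
    return np.cov(y_lag, x)[0, 1] / np.var(x)

# Specification 1: VA = raw class mean of current scores (no lagged-score control).
va_raw = np.array([y[teacher == j].mean() for j in range(J)])

# Specification 2: VA = class mean residual after controlling for lagged scores.
b = np.cov(y, y_lag)[0, 1] / np.var(y_lag)
resid = y - b * y_lag
va_ctrl = np.array([resid[teacher == j].mean() for j in range(J)])

print(f"balance coefficient, no controls:    {balance_coef(va_raw): .3f}")
print(f"balance coefficient, lagged control: {balance_coef(va_ctrl): .3f}")
# Assignment is random, so any nonzero coefficient is spurious: the raw-mean VA
# estimates absorb class-mean ability and serially correlated shocks, and with
# only n = 20 students per teacher that estimation error never averages away.
```

Note that in this sketch the spurious imbalance under the first specification does not shrink as the number of teachers J grows; only larger classes n would reduce it, which corresponds to the abstract's second point about estimation error that does not vanish asymptotically.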
