Abstract
Background: Economists are making increasing use of measures of student achievement obtained through large-scale survey assessments such as NAEP, TIMSS, and PISA. The construction of these measures, employing plausible value (PV) methodology, is quite different from that of the more familiar test scores associated with assessments such as the SAT or ACT. These differences have important implications both for utilization and interpretation. Although much has been written about PVs, it appears that there are still misconceptions about whether and how to employ them in secondary analyses.
Methods: We address a range of technical issues, including those raised in a recent article written to inform economists using these databases. First, an extensive review of the relevant literature was conducted, with particular attention to key publications that describe the derivation and psychometric characteristics of such achievement measures. Second, a simulation study was carried out to compare the statistical properties of estimates based on PVs with those based on other, commonly used methods.
Results: It is shown, through both theoretical analysis and simulation, that under fairly general conditions the appropriate use of PVs yields approximately unbiased estimates of model parameters in regression analyses of large-scale survey data. The superiority of the PV methodology is particularly evident when measures of student achievement are employed as explanatory variables.
Conclusions: The PV methodology used to report student test performance in large-scale surveys remains the state of the art for secondary analyses of these databases.
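To make the mechanics concrete, the sketch below (our illustration, not code from the paper) shows the standard way PVs are analyzed in a regression: fit the model once per plausible value, average the point estimates, and combine the variances with Rubin's multiple-imputation rules. The data are simulated and all variable names are invented; the PVs here are simply mimicked as noisy draws around the latent trait, and a real LSAS analysis would additionally use the survey's sampling weights and replicate weights.

```python
# Minimal sketch (not from the paper): combining regression estimates across
# plausible values with Rubin's multiple-imputation rules, the standard way to
# analyze PV-reported achievement. All names and quantities are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

n, n_pv = 1000, 5
ses = rng.normal(size=n)                      # hypothetical covariate (e.g., SES)
theta = 0.5 * ses + rng.normal(size=n)        # latent achievement
# Each PV is a draw from the posterior of theta given the observed responses;
# here we mimic that with draws around theta (illustration of the mechanics only).
pvs = theta[:, None] + rng.normal(scale=0.6, size=(n, n_pv))

X = sm.add_constant(ses)
coefs, variances = [], []
for m in range(n_pv):
    fit = sm.OLS(pvs[:, m], X).fit()
    coefs.append(fit.params[1])               # slope on SES for this PV
    variances.append(fit.bse[1] ** 2)         # its sampling variance

coefs, variances = np.array(coefs), np.array(variances)
qbar = coefs.mean()                           # combined point estimate
ubar = variances.mean()                       # within-imputation variance
b = coefs.var(ddof=1)                         # between-imputation variance
total_var = ubar + (1 + 1 / n_pv) * b         # Rubin's total variance
print(f"slope = {qbar:.3f}, se = {np.sqrt(total_var):.3f}")
```

Note the design point: averaging over PVs propagates the measurement uncertainty into the standard error through the between-imputation term, which a single point score cannot do.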
Highlights
Economists are making increasing use of measures of student achievement obtained through large-scale survey assessments such as the National Assessment of Educational Progress (NAEP), the Trends in International Mathematics and Science Study (TIMSS), and the Programme for International Student Assessment (PISA)
Although the latent regression modeling and the associated imputation methodology have been the focus of a large number of publications showing that these methods produce unbiased population estimates (e.g., Mislevy et al. 1992; von Davier 2007; von Davier and Mislevy 2009; Marsman et al. 2016), the recent article by Jacob and Rothstein (2016) [JR] questions the increasing use by economists of the test scores so generated as credible measures of human capital; a toy illustration of the attenuation issue at stake appears after this list
The article’s broad coverage is, in our view, both welcome and somewhat problematic: The issues arising with conventionally designed standardized tests are different from those that arise in the analysis of data from large-scale assessment surveys (LSAS) such as the National Assessment of Educational Progress (NAEP), the Programme for International Student Assessment (PISA), the Trends in International Mathematics and Science Study (TIMSS), the Progress in International Reading Literacy Study (PIRLS), and the Programme for the International Assessment of Adult Competencies (PIAAC)
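As a concrete illustration of the stakes (our construction, not JR's analysis or the paper's simulation): when an error-prone point score stands in for latent achievement as an explanatory variable, the estimated slope is attenuated toward zero, whereas averaging the estimate over properly drawn plausible values recovers it. The toy normal model and all names below are assumptions made purely for the demonstration.

```python
# Illustration (our construction): regressing an outcome on a single
# error-prone score attenuates the slope; averaging the slope over plausible
# values drawn from the posterior of the latent trait does not.
import numpy as np

rng = np.random.default_rng(1)
n, n_pv, true_beta = 50_000, 5, 1.0

theta = rng.normal(size=n)                         # latent achievement
y = true_beta * theta + rng.normal(size=n)         # outcome (e.g., later earnings)

err_sd = 0.7                                       # measurement error of the test
score = theta + rng.normal(scale=err_sd, size=n)   # one observed point score

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Naive: one error-prone score as the regressor -> attenuated toward zero
print("naive slope:", round(slope(score, y), 3))   # ~ beta / (1 + err_sd**2)

# PV-style: draws from the posterior of theta given the score AND y (the
# outcome plays the role of a conditioning variable in the latent regression);
# under this toy normal model the posterior is available in closed form.
prec = 1 + 1 / err_sd**2 + true_beta**2            # prior + score + outcome precisions
post_mean = (score / err_sd**2 + true_beta * y) / prec
post_sd = np.sqrt(1 / prec)
pv_slopes = [slope(post_mean + rng.normal(scale=post_sd, size=n), y)
             for _ in range(n_pv)]
print("PV-based slope:", round(float(np.mean(pv_slopes)), 3))
```

The key design point is that the PVs are drawn conditioning on the analyst's other variables (here, the outcome); PVs generated from a latent regression that omits those conditioning variables would not have this bias-removing property.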