Abstract
In answer to calls from the policy community for stronger evidence, educational researchers and their associated organizations increasingly demand more studies that can yield causal inferences. International large-scale assessments (ILSAs) have been targeted as rich data sources for causal research. It is in this context that we take up a discussion around causal inferences and ILSAs. Although these rich, carefully developed studies have much to offer in terms of understanding educational systems, we argue that the conditions for making strong causal inferences are rarely met. To develop our argument, we first discuss, in general, the nature of causal inferences and then suggest and apply a validity framework to evaluate the tenability of claims made in two well-cited studies. The cited studies exemplify interesting design features and advances in methods of data analysis and certainly contribute to the knowledge base in educational research; however, methodological shortcomings, some of which are unavoidable even in the best of circumstances, urge a more cautious interpretation than that of strict “cause and effect.” We then discuss how findings from causal-focused research may not provide answers to the often broad questions posed by the policy community. We conclude with examples of the importance of the validity framework for the ILSA research community and a suggestion of what should be included in studies that wish to employ quasi-experimental methods with ILSA data.
Highlights
Policy makers often express a need for scientifically-based evidence to articulate policy and make funding decisions (e.g., Raudenbush, 2008; Stevens, 2011; Sutherland et al., 2012)
Although carefully developed studies have much to offer in terms of understanding educational systems, we argue that the conditions for making strong causal inferences are rarely met
Experimental and quasi-experimental designs play a key role in developing an understanding of important phenomena in educational research
Summary
Policy makers often express a need for scientifically-based evidence to articulate policy and make funding decisions (e.g., Raudenbush, 2008; Stevens, 2011; Sutherland et al., 2012). This example was chosen because it is one of the first instances where researchers explored the Trends in International Mathematics and Science Study (TIMSS) data to design a study that uses matched groups to calculate gain scores and “sophisticated statistical techniques [that presumably] allows them to generate causal hypotheses concerning specific aspects of the curriculum on student learning”.