Abstract

Calls for empirical investigations of the Common Core State Standards (CCSS) for English Language Arts have been widespread, particularly in the area of text complexity in the primary grades (e.g., Hiebert & Mesmer, Educational Researcher, 42(1), 44-51, 2013). The CCSS state that qualitative methods (such as Fountas and Pinnell) and quantitative methods (such as Lexiles) can be used to gauge text complexity (CCSS Initiative, 2010). However, researchers have questioned the validity of these tools for several decades (e.g., Hiebert & Pearson, 2010). In an effort to establish the criterion validity of these tools, individual studies have examined how well they correlate with actual student reading performance measures, most commonly reading comprehension and/or oral-reading fluency (ORF). ORF is a key aspect of reading success and as such is often used for progress-monitoring purposes. To date, however, studies have not been able to evaluate different text complexity tools and their relation to reading outcomes across studies. This is challenging because the pairwise meta-analytic model cannot synthesize several independent variables that differ both within and across studies. It is therefore unable to answer pressing research questions in education, such as: which text complexity tool correlates most strongly with student ORF (and is thus a good measure of text difficulty)? This question is timely given that the CCSS explicitly mention various text complexity tools, yet the validity of such tools has been repeatedly questioned by researchers. This article provides preliminary evidence to answer that question using an approach borrowed from the field of medicine: Network Meta-Analysis (NMA; Lumley, Statistics in Medicine, 21, 2313-2324, 2002). A systematic search yielded 5 studies using 19 different text complexity tools with ORF as the measured reading outcome. Both frequentist and Bayesian NMAs were conducted to pool the correlations of each text complexity tool with students' ORF. Although the results differed slightly across the two approaches, there is preliminary evidence supporting the hypothesis that text complexity tools that incorporate more fine-grained sub-lexical variables are more strongly correlated with student outcomes. While the results of this example cannot be generalized because of the small number of included studies, this article shows that NMA is a promising new analytic tool for synthesizing educational research.
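
The abstract does not include code, so as a rough illustration of the frequentist side of the approach it describes, the sketch below shows fixed-effect NMA expressed as a weighted least-squares problem over within-study contrasts. Every study label, tool name, z-difference, and variance in it is invented for illustration; this is not the article's data, its list of 19 tools, or its actual analysis.

```python
import numpy as np

# Hypothetical data, not from the article. Each row is one within-study
# contrast between two text complexity tools: the difference of their
# Fisher-z transformed correlations with ORF (z = atanh(r), with
# Var(z) ~ 1/(n - 3)), and the variance of that difference.
# (study, tool_a, tool_b, z_a - z_b, variance of the difference)
contrasts = [
    ("S1", "Lexile", "FountasPinnell",  0.10, 0.020),
    ("S2", "Lexile", "ATOS",           -0.05, 0.030),
    ("S3", "ATOS",   "FountasPinnell",  0.12, 0.025),
    ("S4", "Lexile", "FountasPinnell",  0.08, 0.015),
]

tools = sorted({t for _, a, b, _, _ in contrasts for t in (a, b)})
ref = tools[0]                          # reference tool for the contrast model
params = [t for t in tools if t != ref]

# Design matrix: +1 for tool_a, -1 for tool_b, reference column dropped.
X = np.zeros((len(contrasts), len(params)))
y = np.array([c[3] for c in contrasts])
W = np.diag([1.0 / c[4] for c in contrasts])  # inverse-variance weights

for i, (_, a, b, _, _) in enumerate(contrasts):
    if a != ref:
        X[i, params.index(a)] += 1.0
    if b != ref:
        X[i, params.index(b)] -= 1.0

# Fixed-effect NMA estimate: weighted least squares over the network,
# which combines direct and indirect evidence for every tool pair.
XtWX = X.T @ W @ X
beta = np.linalg.solve(XtWX, X.T @ W @ y)
se = np.sqrt(np.diag(np.linalg.inv(XtWX)))

for tool, est, s in zip(params, beta, se):
    print(f"z({tool}) - z({ref}) = {est:+.3f} (SE {s:.3f})")
```

Pooled contrasts on the Fisher-z scale can be back-transformed with tanh for interpretation as correlations; the Bayesian variant mentioned in the abstract would replace the least-squares solve with priors on the tool effects and posterior sampling.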
