Abstract

Automated text complexity measurement tools (also called readability metrics) have been proposed as a way to help teachers, textbook publishers, and assessment developers select texts that are closely aligned with the new, more demanding text complexity expectations specified in the Common Core State Standards. This article examines a critical element of the validity arguments presented in support of proposed metrics: the claim that criterion text complexity scores developed from students’ responses to reading comprehension test items are reflective of the difficulties actually experienced by students while reading. Evidence that fails to support this assertion is examined, and implications relative to the goal of obtaining valid, unbiased evidence about the measurement properties of proposed readability metrics are discussed.
