Abstract

Little is known regarding the underlying constructs of writing tests used by U.S. state education authorities and national governments to evaluate the writing performance of their students, especially in the middle school grades. Through a content analysis of 78 prompts and 35 rubrics from 27 states’ middle school writing assessments from 2001 to 2007, and three representative prompts and rubrics from the United States’ 2007 National Assessment of Educational Progress (NAEP) writing test, this study illuminates the writing constructs underlying large-scale writing assessments by examining the features of prompts and rubrics and investigating the connections between them in terms of genre demands. We found that state writing assessments and the NAEP align with respect to measurement parameters associated with (a) emphasis on writing process, audience awareness, and topic knowledge, (b) availability of procedural facilitators (e.g., checklists, rubrics, dictionaries) to assist students in their writing, and (c) inclusion of assessment criteria focused on organization, structure, content, details, sentence fluency, semantics, and general conventions. However, the NAEP writing assessment differs from many state tests of writing by including explicit directions for students to review their writing, giving students two timed writing tasks rather than one, making informational text production one of the three genres assessed, and including genre-specific evaluative components in rubrics. This study contributes to our understanding of the direction and path that large-scale writing assessments in the US are taking and how writing assessments are continually evolving.
