Abstract

This study investigated the dependability of reading comprehension scores across different text genres and response formats for readers with varied language knowledge. Participants included 78 fourth-graders in an urban elementary school. A randomized, counterbalanced 3 × 2 design crossed three response formats (open-ended, multiple-choice, retell) with two text genres (narrative, expository) from the Qualitative Reading Inventory (QRI-5) reading comprehension test. Standardized language knowledge measures from the Woodcock-Johnson III Tests of Achievement (Academic Knowledge, Oral Comprehension, Picture Vocabulary) defined three reader profiles: (a) < 90 as emerging, (b) 90-100 as basic, and (c) > 100 as proficient. Generalizability studies partitioned variance in scores attributable to reader, text genre, and response format for all three groups. Response format accounted for 42.8% to 62.4% of variance in reading comprehension scores across groups, whereas text genre accounted for very little variance (1.2%-4.1%). Single scores fell well below a 0.80 dependability threshold (absolute phi coefficients = 0.06-0.14). Decision studies projecting the dependability achievable with additional scores varied by response format within each language knowledge group, with very low projected dependability for open-ended and multiple-choice scores among readers with basic language knowledge. Multiple-choice scores showed similarly low projected dependability for readers with emerging language knowledge. Findings demonstrate interactions between reader language knowledge and response format in reading comprehension assessment. Implications underscore the limitations of using a single score to classify readers as proficient or not proficient in foundational skills.
