Abstract

These studies examined the contribution of text, activity, and reader to variance in reading comprehension test scores. Study 1 focused on multiple-choice and open-ended item responses, whereas Study 2 examined retell. Both studies included 79 fourth-grade students (age: M = 9.72 years, SD = 0.34). Each student read six passages from the Qualitative Reading Inventory-Fifth Edition (QRI-5) and completed comprehension assessments in varying response formats (open-ended questions, multiple choice, and retell). Measures of cognitive capacity, language knowledge, learning motivation, and word reading fluency were also administered. In Study 1, item-response crossed random effects models revealed statistically significant differences between open-ended question and multiple-choice response formats, and three covariates significantly predicted reading comprehension test scores: (a) attentive behavior, (b) language knowledge, and (c) working memory. Further exploratory analyses identified two two-way interactions: (a) Response Format × Attentive Behavior and (b) Response Format × Language Knowledge. In Study 2, crossed random effects models revealed two statistically significant predictors of retell scores: (a) text genre and (b) language knowledge. Findings suggest that different response format activities may contribute to variance in reading comprehension test scores and that this test property may further interact with text as well as reader abilities.
