Abstract

This research examined component processes that contribute to performance on one of the new, standards-based reading tests that have become a staple in many states. Participants were 60 Grade 4 students randomly sampled from 7 classrooms in a rural school district. The test we studied mixed a traditional approach (multiple-choice items) with a performance assessment approach (constructed-response items requiring written responses). Our findings indicated that multiple-choice and constructed-response items drew on different cognitive skills. Writing ability emerged as an important source of individual differences in overall reading performance, but its influence was limited to the constructed-response items. After controlling for word identification and listening skill, writing ability accounted for no variance in multiple-choice reading scores. By contrast, writing ability accounted for unique variance in constructed-response reading scores even after controlling for word identification and listening skill, and it explained more of that variance than did either of those skills. In addition, performance on the multiple-choice reading measure, together with writing ability, accounted for nearly all of the reliable variance in performance on the constructed-response reading measure.
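
The claim that writing ability "accounted for unique variance ... after controlling for word identification and listening skill" describes a hierarchical regression, in which predictors are entered in steps and the change in R² attributed to the newly added predictor is its unique variance. The sketch below illustrates only that logic with simulated data; the variable names, coefficients, and data are hypothetical and are not taken from the study.

```python
# Minimal sketch of hierarchical regression (unique-variance logic).
# All names and simulated values below are hypothetical illustrations,
# not the study's actual measures or results.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60  # sample size matching the abstract

# Hypothetical stand-ins for the component measures
word_id = rng.normal(size=n)    # word identification
listening = rng.normal(size=n)  # listening skill
writing = rng.normal(size=n)    # writing ability

# Simulated constructed-response reading score that loads on writing
cr_reading = (0.3 * word_id + 0.3 * listening + 0.6 * writing
              + rng.normal(scale=0.5, size=n))

# Step 1: control variables only (word identification, listening)
X1 = sm.add_constant(np.column_stack([word_id, listening]))
r2_step1 = sm.OLS(cr_reading, X1).fit().rsquared

# Step 2: add writing ability
X2 = sm.add_constant(np.column_stack([word_id, listening, writing]))
r2_step2 = sm.OLS(cr_reading, X2).fit().rsquared

# Unique variance attributed to writing = change in R^2 across steps
print(f"R^2, controls only: {r2_step1:.3f}")
print(f"R^2, + writing:     {r2_step2:.3f}")
print(f"Delta R^2 (writing): {r2_step2 - r2_step1:.3f}")
```

Running the same two-step comparison with the multiple-choice score as the outcome would, per the abstract's findings, yield a near-zero change in R² for writing ability.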
