Abstract

The objective of the present study was to evaluate the extent to which students who took a computer adaptive test of reading comprehension that accounted for testlet effects were administered fewer passages and received a more precise estimate of their reading comprehension ability than students in the control condition. A randomized controlled trial was used in which 529 students in Grades 4–8 and 10 were randomly assigned to one of two conditions, both of which involved a computerized adaptive assessment of reading comprehension. Participants in the experimental condition had ability scores estimated with an item response model that accounted for item-dependence effects in the reading assessment, whereas control students took a version in which item-dependence effects were not controlled. Results indicated that examinees in the experimental condition were administered fewer passages (average Hedges' g = 0.97) and had more reliable estimates of their reading comprehension ability (average Hedges' g = 0.60). Findings are discussed in the context of potential time savings in assessment practices without sacrificing reliability.
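For readers interpreting the effect sizes reported above, the sketch below illustrates how a bias-corrected standardized mean difference (Hedges' g) is typically computed for two independent groups. It is not the authors' code; the function name and the example data (counts of passages administered per condition) are hypothetical and chosen only to show the calculation.

import numpy as np

def hedges_g(x1, x2):
    # Bias-corrected standardized mean difference between two independent groups.
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    n1, n2 = len(x1), len(x2)
    # Pooled standard deviation across the two groups
    s_pooled = np.sqrt(((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1))
                       / (n1 + n2 - 2))
    d = (x1.mean() - x2.mean()) / s_pooled      # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)    # small-sample bias correction
    return d * correction

# Hypothetical example: passages administered under each condition
control = np.random.default_rng(1).poisson(7, size=60)
experimental = np.random.default_rng(0).poisson(5, size=60)
print(f"Hedges' g = {hedges_g(control, experimental):.2f}")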
