Abstract

Recent studies indicate that examiners make a number of intentional and unintentional errors when administering reading assessments to students. Because these errors introduce construct-irrelevant variance in scores, the fidelity of test administrations could influence the results of evaluation studies. To determine how assessment fidelity is being addressed in reading intervention research, we systematically reviewed 46 studies conducted with students in Grades K–8 identified as having a reading disability or at risk for reading failure. Articles were coded for features such as the number and type of tests administered, experience and role of examiners, tester-to-student ratio, initial and follow-up training provided, monitoring procedures, testing environment, and scoring procedures. Findings suggest assessment integrity data are rarely reported. We discuss the results in a framework of potential threats to assessment fidelity and the implications of these threats for interpreting intervention study results.
