Abstract

Accurate and efficient universal screening is a foundational component of multi-tiered systems of support for reading. By the time students reach middle school, educators often have extant data available to inform screening decisions. Therefore, the decision to collect additional data for screening should be considered carefully. The classification accuracy of aimswebPlus reading, a newly updated version of a popular suite of screening tools, has not been independently examined in middle school since its release. We used districtwide data from a midsize city in Texas to retrospectively examine the classification accuracy of aimswebPlus reading composite scores from the fall and winter benchmarking periods. The criterion measure was the annual statewide reading test administered in spring. To provide a comparison for the aimswebPlus results, we also evaluated the accuracy of screening decisions based on prior-year statewide reading test scores. Decisions based on the aimswebPlus “default” cut-scores resulted in unacceptable sensitivity for universal screening. Following the aimswebPlus recommended method for establishing local cut-scores improved sensitivity in each grade and benchmarking season, but the values still fell below recommendations for minimally acceptable sensitivity. In comparison, decisions based on prior-year state test scores demonstrated adequate sensitivity and specificity in Grades 7 and 8. Directions for future research and recommendations for practice are discussed within the context of study limitations.
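
As context for the metrics discussed above, the sketch below illustrates how sensitivity and specificity are typically computed when a screening decision (at risk vs. not at risk, determined by a cut-score) is compared against a later criterion outcome such as a statewide reading test. The function name, student data, and resulting values are hypothetical illustrations, not taken from the study.

```python
# Hypothetical illustration (not study data): computing sensitivity and
# specificity for screening decisions checked against a criterion outcome.

def classification_accuracy(screen_at_risk, criterion_fail):
    """Return (sensitivity, specificity) for paired binary decisions.

    screen_at_risk: list of bools, True if the screener flagged the student.
    criterion_fail: list of bools, True if the student later failed the criterion test.
    """
    pairs = list(zip(screen_at_risk, criterion_fail))
    tp = sum(s and c for s, c in pairs)                # flagged and later failed
    fn = sum((not s) and c for s, c in pairs)          # missed but later failed
    tn = sum((not s) and (not c) for s, c in pairs)    # not flagged and passed
    fp = sum(s and (not c) for s, c in pairs)          # flagged but passed
    sensitivity = tp / (tp + fn)  # share of truly at-risk students the screener caught
    specificity = tn / (tn + fp)  # share of not-at-risk students correctly not flagged
    return sensitivity, specificity

# Hypothetical example: 10 students, 4 of whom later fail the criterion test.
screen = [True, True, False, True, False, False, False, True, False, False]
fail   = [True, True, True, False, False, False, False, True, False, False]
sens, spec = classification_accuracy(screen, fail)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # 0.75 and 0.83 here
```

Under this framing, raising a cut-score flags more students and tends to increase sensitivity at the cost of specificity, which is why recommendations for universal screening emphasize a minimally acceptable sensitivity rather than overall accuracy alone.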
