Abstract

Curriculum-based measurement (CBM) is a standardized method of directly observing and scoring a student performing critical skills in a subject or grade level (Deno, 1985). CBM is also commonly referred to as a general outcome measure because its tasks sample an entire grade-level curriculum and indicate when there is a problem or deficit (Deno & Mirkin, 1977). The tasks are intended to be quick, easy to score, inexpensive, and suitable for comparing students within and across classrooms (Deno, 1985). For example, a common reading CBM is oral reading fluency, in which a student reads a passage aloud at their instructional level and the number of correctly read words is counted. CBM was originally developed for special education classrooms, so that teachers could evaluate the effectiveness of their instruction with students with disabilities. Over time, however, CBM has also been used in general education settings and within multi-tiered systems of support (MTSS) for screening students for academic risk, predicting student performance over time, providing teachers with feedback on their instruction with individual students, and tracking the progress of students receiving interventions to determine whether they are on target to meet their goals (Deno, 2003). Research over the last 30 years has demonstrated evidence of the reliability and validity of CBM tasks in reading, writing, spelling, and mathematics (Deno, Marston, Mirkin, & Lowry, 1982). It is essential that CBM produce reliable data that are predictive of future academic performance (i.e., technically adequate) as well as data that are sensitive to a student's incremental gains in academic skill across relatively short periods of time.
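The oral reading fluency scoring described above can be sketched in a few lines. This is an illustrative example only, not from the source; the function name and the convention of normalizing to words correct per minute (WCPM) are assumptions for the sketch.

```python
def score_orf(words_attempted: int, errors: int, seconds: float) -> float:
    """Return words correct per minute (WCPM) for a timed oral reading probe.

    The raw CBM score is the count of correctly read words; normalizing by
    elapsed time makes probes of different lengths comparable.
    """
    words_correct = words_attempted - errors
    return words_correct * 60.0 / seconds

# Hypothetical probe: 104 words attempted with 6 errors in a 60-second reading.
print(score_orf(104, 6, 60.0))  # → 98.0
```

Because the score is a simple count per unit time, it is quick to collect and easy to compare across students, which is consistent with the design goals Deno (1985) describes.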
Accordingly, Fuchs (2004) outlined three areas of research to support CBM: (1) technical adequacy of the static score, a score at one point in time typically used for screening or identifying students in need of support; (2) features of slope and sensitivity to growth, that is, administering measures regularly over a span of weeks or months to monitor progress; and (3) the impact of CBM data collection on teacher practices and student outcomes. Technical adequacy is determined by both reliability (i.e., consistency of the score) and validity (i.e., the score is predictive of the target skill or criterion performance). Although there are no universally agreed-upon standards of reliability and validity across all types of CBM, the National Center on Intensive Intervention (NCII, n.d.) describes convincing reliability as r ≥ 0.70 for both alternate-form and test-retest reliability, and convincing validity as r ≥ 0.60 with a criterion measure. However, some constructs, such as writing, are complex and historically more difficult to measure than other domains (e.g., reading); technical adequacy is therefore generally defined relative to the criterion measures within each specific content area. The following sections summarize CBM research across various forms as it relates to technical adequacy, sensitivity to growth, and teacher and student outcomes.
