enhance the competence of individual students by taking them from where they are instructionally and moving them toward desired educational goals or outcomes. In so doing, these educators typically make decisions about what to teach (instructional level or content) and how to teach (the particular instructional approach, content, instructional strategies, or tactics that work best). Tests are helpful tools in making decisions about what to teach, but they do not help us decide how to teach (Salvia, Ysseldyke, & Bolt, 2010). The best way to make decisions about how to teach is to teach, and to gather data on the relative effectiveness of alternative instructional approaches or interventions. Essentially, the task is to monitor progress and use data to make instructional modifications; such is the thinking that underlies precision teaching (Lindsley, 1964), data-based program modification (Deno & Mirkin, 1977), and RTI (Burns, Deno, & Jimerson, 2007).

Professionals traditionally have demonstrated instructional validity by attempting to show that performance on ability or achievement measures interacts with specific treatments or instructional interventions to produce differential outcomes. These interactions are variously referred to as aptitude-by-treatment interactions (ATIs) or trait-by-treatment interactions (Cronbach & Snow, 1977). Put simply, the assumption is that test performance predicts the success of differing interventions, and that educators’ knowledge of how students perform on tests helps them differentiate instruction. However, decades of research have yet to identify ATIs (Kavale & Forness, 2000), and the passage of the Education of All Handicapped Children Act (EHA) in 1975 made prediction a moot issue. We entered a zero-demission era in which our focus shifted from making predictions about students’ lives to making a difference in their lives (Reschly & Ysseldyke, 2002).
EHA’s successor, the Individuals With Disabilities Education Act (IDEA, 2008), permits schools to use student response to validated interventions (i.e., RTI) to identify learning disabilities. This has resulted in intensified interest in gathering assessment information that suggests valid approaches to instruction, and in heightened attention to data-driven instructional decision making. At the same time, it has spurred renewed debate about what constitutes instructionally relevant data (Batsche, Kavale, & Kovaleski, 2006). It is important to clarify both the definition and the use of instructionally relevant data within RTI. As the unique and complex situations educators encounter