Abstract

Assessment provides a foundation on which school psychology is built (Ysseldyke et al., 2006), and educational assessment in this country is at a major crossroads. Technology and technology-based assessment, high-stakes testing, and the increased infusion of English language learners, among other factors, have heavily influenced practices and even challenged some of our basic assumptions (Fadel, Honey, & Pasnik, 2007; Pitoniak et al., 2009). Moreover, the predominant paradigm has shifted from assessment of learning to assessment for learning (Stiggins, 2005), a move that started in the late 1960s and early 1970s with the push for formative as opposed to summative evaluation (Bloom, Hastings, & Madaus, 1971). School psychology embraced the formative evaluation movement, which has resulted in positive effects for children and youth, but it needs to continue to push practices in kindergarten through Grade 12 schools to remain relevant and to continue to improve the lives of children.

Recent reforms in assessment occurred somewhat simultaneously with a reconceptualization of validity. Messick's (1989) seminal definition conceptualized validity as an "integrated evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores or other modes of assessment" (p. 13). However, validity evidence has historically relied on correlations between similar measures rather than on evaluating judgments and inferences. Certainly, correlations with other tests can be an aspect of validity research, but relying solely on correlational evidence represents "a weak program" (Kane, 2001, p. 326), and such data should be included only if they are theoretically relevant. Moreover, it may be difficult to identify a satisfactory criterion, and establishing validity through correlations between tests results in conceptual circularity (Kane, 2001).
Thus, correlations could be reported within the context of a science of diagnosis that researches meaningful decision thresholds and the diagnostic accuracy associated with those thresholds (Swets, Dawes, & Monahan, 2000), but research within school psychology should move beyond simple correlations between related measures. Kane (2006) recently argued against a criterion model and supported Messick's (1989) construct validity approach, which evaluates the decisions made with the data rather than the data themselves. Construct validity can only be supported through a line of inquiry as outlined by Kane (2001, p. 330):

1. State the proposed interpretive argument as clearly and explicitly as possible.
2. Develop a preliminary version of the validity argument by assembling all available evidence relevant to the inferences and assumptions in the interpretive argument.
3. Evaluate (empirically and/or logically) the most problematic assumptions in the interpretive argument. As a result of these evaluations, the interpretive argument may be rejected, or it may be improved by adjusting the interpretation and/or the measurement procedure in order to correct any problems identified.
4. Restate the interpretive argument and the validity argument and repeat Step 3 until all inferences in the interpretive argument are plausible, or the interpretive argument is rejected.

Although Kane (2001) outlines clear steps for validity research, he also cautions that they are not easy to implement. Accepting a construct approach to validity emphasizes the importance of theory in the process (Messick, 1995). Thus, validity research begins with developing or adopting a theory, and decisions are supported as valid if the observed data and resulting decisions are consistent with that theory (Cronbach & Meehl, 1955; Kane, 2001).
Theories should be able to successfully predict observable behavior, and research should critically examine how well the data do, in fact, predict accurately and in a theoretically consistent manner. …
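To make the diagnostic-accuracy framing above concrete, the sketch below shows how the accuracy of a single decision threshold (in the spirit of Swets, Dawes, & Monahan, 2000) can be quantified as sensitivity and specificity. The screening scores, at-risk statuses, and cut score are entirely hypothetical, invented for illustration only; they are not drawn from any study cited here.

```python
# Illustrative sketch only: evaluating the diagnostic accuracy of a
# hypothetical screening cut score. All data below are invented.

def diagnostic_accuracy(scores, has_condition, cut_score):
    """Flag each case as positive when its score falls at or below the
    cut score, then compare flags against true status to compute
    sensitivity (true-positive rate) and specificity (true-negative rate)."""
    tp = fp = tn = fn = 0
    for score, positive in zip(scores, has_condition):
        flagged = score <= cut_score  # low score -> flagged as at risk
        if flagged and positive:
            tp += 1
        elif flagged and not positive:
            fp += 1
        elif not flagged and positive:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn)  # proportion of at-risk cases caught
    specificity = tn / (tn + fp)  # proportion of not-at-risk cases cleared
    return sensitivity, specificity

# Hypothetical screening scores and true at-risk status (1 = at risk)
scores = [12, 25, 31, 18, 40, 22, 15, 35, 28, 10]
status = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]

sens, spec = diagnostic_accuracy(scores, status, cut_score=20)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Repeating this computation across candidate cut scores traces out the trade-off between catching at-risk students and over-identifying others, which is the kind of evidence a "science of diagnosis" would weigh when defending a decision threshold.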
