Abstract

Standards-based score reports interpret test performance with reference to cut scores defining categories like "below basic," "proficient," or "master." This article first develops a conceptual framework for validity arguments supporting such interpretations, then presents three applications. Two of these serve to introduce new standard-setting methods.

The conceptual framework lays out the logic of validity arguments in support of standards-based score interpretations, focusing on requirements that the performance standard (i.e., the characterization of examinees who surpass the cut score) be defensible both as a description and as a normative judgment, and that the cut score accurately operationalize that performance standard.

The three applications illustrate performance standards that differ in the breadth of the claims they set forth. The first, a "criterion-referenced testing" application, features a narrow performance standard that corresponds closely to performance on the test itself. The second, "minimum competency testing," introduces a new standard-setting method that might be used when there is a weaker linkage between the test and the performance standard. The third, a contemporary standards-based testing application, proposes a new procedure whereby the performance standard would be derived directly from the specification for the test itself.
