Abstract

The unified conceptualization of validity with regard to content-related evidence has been expressed succinctly by the authors of the Standards for Educational and Psychological Testing (AERA et al., 1985):

"Content-related evidence of validity is a central concern during [instrument] development, whether such development occurs in a research setting, in a publishing house, or in the context of professional practice. Expert professional judgment should play an integral part in developing the definition of what is to be measured, such as describing the universe of content, generating or selecting the content sample, and specifying the item format and scoring system. Thus, inferences about content are linked to [instrument] construction as well as to establishing evidence of validity after [an instrument] has been developed and chosen for use." (p. 11)

This article has demonstrated the process of collecting content-related validity evidence in terms of the specific requirements of the Standards. Five standards were identified and interpreted according to the initial stages of instrument construction: domain specification, item development, and item, subscale, and scale content validation. The role of expert judgment during these stages and the variety of evidence that can be gathered were described. For most instruments, the review process would necessitate two meetings of 1 to 2 hours each to review the domain specifications and another two meetings to determine the match between the items and the specifications. The importance of these 8 hours, or whatever additional time is needed to obtain the validity evidence, was emphasized. Finally, an application of the Standards was provided to illustrate step by step how the judgmental review process can be planned and executed.
