Abstract

This article illustrates an argument-based approach to presenting validity evidence for assessment items intended to measure a complex construct. Our focus is on developing a measure of teachers' ability to analyze and respond to students' mathematical thinking for the purpose of program evaluation. Our validity argument consists of claims addressing connections between our item-development process and the theoretical model for the construct we aim to measure: attentiveness. Evidence derived from theoretical arguments, in conjunction with our multiphase item-development process, is used to support these claims, including psychometric evidence of Rasch model fit and category ordering. Taken collectively, the evidence supports the claim that our selected-response items can measure increasing levels of attentiveness. More broadly, our goal in presenting this work is to demonstrate how theoretical arguments and empirical evidence fit within an argument to support claims about how well a construct is represented, operationalized, and structured.
