Abstract

Evaluators often invest considerable effort in designing evaluation studies. However, there is evidence that comparatively little attention is paid to measurement. One possible explanation is that the focus in applied psychometrics is on reliability, with less attention paid to measurement model misspecification and the bias it can introduce into estimates that use the resulting scores. Another possible explanation is that evaluators frequently want to use the simplest scoring approach possible, under the assumption that it is transparent and therefore relies on fewer assumptions, a mindset that is often, if not always, misguided. In this study, we walk through the decisions involved in producing scores for program evaluation studies in an attempt to demystify the psychometrics and to show how those decisions can be consequential. We use Monte Carlo simulations to illustrate the effects of those decisions in a randomized controlled trial, and then show that these decisions can affect published evaluation results. Finally, we offer evaluators best practices for scoring in evaluation, including guidance on when deviating from those practices is most likely to affect their work.
