Abstract

The program evaluation literature has paid little attention to how measuring the quality of implementation through observations requires tradeoffs between rigor (reliability and validity) and feasibility. We present a case example of how we addressed rigor in light of feasibility concerns when developing and conducting observations to measure the quality of implementation of a small education professional development program. We discuss the results of meta-evaluative analyses of the reliability of the quality observations, and we present conclusions about conducting observations in a rigorous and feasible manner. The results show that the feasibility constraints we faced did not notably reduce the rigor of our methods.
