Abstract

Despite bipartisan support in Washington, DC, dating back to the mid-1990s, the “what works” approach has yet to gain broad acceptance among policymakers and practitioners. One way to build such acceptance is to increase the usefulness of program impact evaluations for these groups. We describe three ways to make impact evaluations more useful to policy and practice: emphasize learning from all studies over sorting out winners and losers; collect better information on the conditions that shape an intervention's success or failure; and learn about the features of programs and policies that influence their effectiveness. We argue that measuring the treatment contrast between the intervention and comparison condition(s) is important for each of these changes. Measurement and analysis of the treatment contrast will increase costs, however, and policymakers and practitioners already view evaluations as expensive. We therefore offer suggestions for reducing costs in other areas of data collection.
