Abstract

• We examine whether widespread calls for impact evaluation influence standards intended to guide the behavior of development actors.
• Among 42 standards, we find wide adoption of principles of information sharing, participation, and listening to stakeholders.
• We find two distinct discourses of evaluation: one impact centric, emphasizing causal attribution, and one more generalist.
• Donors and global organizations tend to be impact centric; NGOs and national-level associations tend to be more generalist.
• Calls for participatory approaches appear well incorporated into standards; calls for more rigorous impact evaluation are not.

For several decades, the aid effectiveness movement has called for more robust, informed, and independent impact evaluation of aid activities, but the prevalence and adoption of these practices remain unclear. This article seeks to understand the current state of impact evaluation practice in the development field by examining standard-setting documents intended to guide the behavior of entities involved in development assistance. We explore these standards as representations of institutional logics that encode current norms, practices, and expectations for these actors, and we examine the extent to which impact evaluation norms and practices are enshrined within these standard-setting documents. To do so, we examine guidance from a diverse set of 42 standards to better understand how evaluation is conceptualized and what standards are being articulated. We find both convergence and divergence in the institutional logics employed and in how evaluation norms and practices are incorporated into standards. We see convergence in the adoption of a normative, process-oriented logic that appears across many entities in the widely articulated commitment to practices such as information sharing, participation, and listening. We find more divergence in the adoption of a results-oriented logic that implies a commitment to impact evaluation. These distinct logics give rise to two discrete discourses: an "evaluation generalist" discourse that conceptualizes evaluation in broad terms and an "impact centric" discourse that articulates a more comprehensive set of principles emphasizing causal attribution. We suggest that structural characteristics and positionality in the aid system may help explain the adoption of different institutional logics and associated evaluation practices.
