- Research Article
- 10.1177/13563890251383904
- Sep 28, 2025
- Evaluation
- Tom Aston
This ‘roundup’ review of evaluation blogs, podcasts and webinars covers the first half of 2025. It highlights several recent ‘big shifts’ in behavioural science and outlines some of the implications for evaluation. It discusses the uncertain process of translating evidence from experimental studies on conditional cash transfer programmes into policy influence, raising questions about the political economy of evidence-based policy more generally. It also covers recent debates regarding government efficiency in response to the establishment of the Department for Government Efficiency in the United States. Cutting across each of these themes are debates on what counts as ‘rigorous’ evidence and what influences policy and budget decisions.
- Research Article
- 10.1177/13563890251367001
- Sep 3, 2025
- Evaluation
- Marie Broholm-Holst + 1 more
Intervention research, which encompasses the development, implementation, and evaluation of complex interventions, represents a significant area of study within public health. Nevertheless, ethical considerations receive limited attention in prominent guidelines. This article aims to highlight and discuss the importance of ethical reflexivity in the context of public health interventions. The article examines and addresses ethical issues, challenges, and dilemmas specific to public health intervention research, including informed consent, categorization, stigmatization, and the reporting of null findings and unintended consequences. We contend that formal and professional guidelines in intervention research require increased flexibility and nuance, including a specific emphasis on ethics where ethical considerations are actively examined and reflected upon.
- Research Article
- 10.1177/13563890251347270
- Aug 9, 2025
- Evaluation
- Anis Ben Brik + 1 more
This study examines the organisational determinants influencing evaluation maturity in the Jordanian public sector, focusing on the interplay of leadership, organisational culture, structure and resource allocation. Grounded in the Resource-Based View theory, the findings indicate that organisational culture is the most significant determinant of evaluation maturity, while resource allocation also plays a crucial role. Although leadership initially demonstrates promise in influencing evaluation practices, its significance diminishes when contextual factors are considered. This suggests that effective evaluation practices are contingent on a supportive organisational culture and adequate resources. The implications of this study extend to public administration reform, emphasising the need for holistic approaches that integrate cultural and resource considerations to enhance evaluation maturity.
- Research Article
- 10.1177/13563890251349537
- Jul 28, 2025
- Evaluation
- Claire Penty Sieffert
What do tools do for evaluators? An affordances framework sheds light on the multiple ways that evaluators interact with tools. “Affordances” are ways that tools enable and constrain action, but the ways that tools do so depend on underlying social conditions. I apply this framework to the case of the Organization for Economic Cooperation and Development’s Development Assistance Committee (DAC) criteria, which are widely used in international development evaluation. Through interviews and document analysis, I find that the DAC criteria offer technical and social affordances: they enable and constrain the ways that evaluators navigate evaluation’s technical tasks, but they also enable and constrain the ways that evaluators interact with others and draw symbolic boundaries around evaluation expertise. I also find that these affordances are shaped by social conditions, such as an evaluator’s perceptions or position in the organization. These findings suggest the importance of assessing evaluation tools’ multiple dimensions in their social contexts to understand tools’ role in evaluation work.
- Research Article
- 10.1177/13563890251347266
- Jul 28, 2025
- Evaluation
- Zunera Rana + 2 more
The study of unintended effects of policies is a key debate among evaluation scholars. Through complexity theory, we argue that unintended effects of (international) public actions are inevitable and question the reliability of evaluations in providing a correct and complete picture of public policy. We use a machine-learning-assisted text-mining case study approach, examining 254 programme evaluations of German international development cooperation as a ‘least likely case’. While German evaluations focus more on unintended effects than Dutch, Norwegian and American evaluations, their treatment is not always correct or complete. There is an overidentification of unintended effects and a bias towards positive ones, with certain types of unintended effects overlooked. We explore explanations for the observed weaknesses, including an overreliance on linear thinking and insufficient guidance for evaluators on identifying unintended effects. We conclude with concrete suggestions to improve implementation of the Organization for Economic Co-operation and Development guidelines that are essential to making public administration more effective and trusted.
- Research Article
- 10.1177/13563890251346772
- Jul 23, 2025
- Evaluation
- Carsten Hinrichsen + 2 more
This article argues that evaluation experts stand to benefit significantly from engaging with concepts and methodologies derived from both realist evaluation and implementation science. First, realist evaluation can facilitate the development of nuanced understandings of the intricate interplay among contextual factors, mechanisms of implementation strategies and implementation outcomes. This enables evaluators to explore how, why, and under what circumstances implementation strategies work. Second, viewing the underlying workings of implementation strategies as ripple effects provides insight into their links with the implementation object, that is, the intervention itself, and into how implementation strategies can influence its workings. Third, realist evaluation allows for an exploration of unintended and unexpected outcomes, which are often overlooked in traditional evaluation frameworks. This is crucial, as unforeseen effects may shape the generation of both implementation and client outcomes. These arguments are illustrated through theoretical and empirical reflections drawn from realist evaluation studies.
- Research Article
- 10.1177/13563890251346778
- Jul 17, 2025
- Evaluation
- Tomasz Kupiec + 9 more
This study explores the organisational determinants of evaluation use. It proposes a framework distinguishing between the initial and advanced phases of evaluation practice and predicting what factors determine the type of evaluation use at each phase. Our framework was empirically tested using a survey of 1123 public and non-governmental organisations across five European Union (EU) countries: Czechia, Denmark, Italy, the Netherlands and Poland. Our findings suggest that the adoption mode of evaluation practice influences the dominant type of evaluation use, but this impact is limited to the first few years after the evaluation practice was introduced in an organisation. In the advanced phase, we identified significant relationships between dimensions of legitimacy and types of evaluation use. This suggests that organisational legitimacy remains a promising predictor of evaluation use; however, the specific dimensions and their relationships with different types of evaluation use require further investigation.
- Research Article
- 10.1177/13563890251365303
- Jul 1, 2025
- Evaluation
- Research Article
- 10.1177/13563890251365773
- Jul 1, 2025
- Evaluation
- Elliot Stern
- Research Article
- 10.1177/13563890251350677
- Jul 1, 2025
- Evaluation
- Steven Pudney + 5 more
The use of artificial intelligence in Critical Infrastructure Systems has increased substantially, having evolved to become both technically possible and financially beneficial. Yet there is an emerging consensus that the consideration and management of artificial intelligence-related risks in Critical Infrastructure Systems have not been commensurate with this rapid growth. Our surveys have identified that generalised artificial intelligence principles, such as those promoted by the Organisation for Economic Co-operation and Development, are not, on their own, fit for purpose in guiding the use of artificial intelligence in Critical Infrastructure Systems. Evaluation is an important aspect of managing those risks, and we argue for the development of a foundational approach suited to the evaluation of artificial intelligence-enhanced Critical Infrastructure Systems as a base from which to further research and improve practice. This study develops a novel conceptual framework for the evaluation of artificial intelligence-enhanced Critical Infrastructure Systems, based on a theory adaptation of Value-Focused Thinking. The framework offers simplicity and additional functionality over the default principles-based framework.