Abstract

In this paper we focus on the governance, in particular the evaluation and monitoring, of the growing number of transdisciplinary collaborations (TDCs). In such TDCs, researchers and a variety of stakeholders collaborate to address societal challenges, such as renewable energy, healthy aging or better language teaching in schools. Commonly used practices for the evaluation of scientific research (accountability, rankings and benchmarking, dedicated to scientific excellence) do not fit the goals of TDCs. A bottom-up, stakeholder-oriented approach is better suited: one that stimulates mutual learning as well as the development of socially robust knowledge. We introduce participatory impact pathways analysis (PIPA), a method that suits these requirements and was developed in the context of development research. Two crucial features are the involvement of stakeholders from the start and the joint development of a theory of change, which narrates what one wants to achieve and how it will be achieved. From this, stakeholders construct a logical frame that serves as a source for indicators. These indicators enable monitoring ex durante, that is, during the TDC. We present evidence of the use of PIPA for a TDC. From this empirical evidence a number of issues with regard to evaluation, monitoring and indicators can be identified that require attention. Most prominent is the change in the function of indicators: instead of looking back at past performance, indicators look forward, to the short, intermediate and more distant future.

Highlights

  • Scientific research, societal and industrial innovation and government policy are increasingly intertwined in transdisciplinary networks or consortia, transdisciplinary collaborations (TDCs)

  • In this paper we focus on the governance, in particular the evaluation and monitoring, of the growing number of transdisciplinary collaborations (TDCs)

  • Before we elaborate on new ways of evaluation for TDCs, we highlight an important conceptual distinction between two main functions of evaluation: (1) evaluations conducted primarily for accountability and (2) evaluations that aim at mutual learning and improvement (Scriven 1991, 1996)


Introduction

Scientific research, societal and industrial innovation and government policy are increasingly intertwined in transdisciplinary networks or consortia, TDCs. The overarching goal of these programs is to focus (academic) research on sectors or fields that are deemed vital for the economy, and/or on issues that are politically important or controversial. One-size-fits-all indicators are not adequate to evaluate research in the context of a TDC; instead, indicators are needed that suit each specific context and TDC. The challenge for the evaluation of TDCs is major because not only are different indicators needed, they also have to be attuned to one another in a meaningful way. An incremental change, such as an extra indicator relating to societal aspects, is not enough. We conclude (Sect. 7) that, given the fundamental change in the context of research, evaluation methods, criteria and indicators should change in a fundamental way too.

Changing policy context: the quest for relevance
Governance of TDCs
Effects on the evaluation of research and innovation
Applying PIPA: design and results
The workshops
Case example: education
Case example: new materials
General observations
Discussion and conclusion
