Abstract

This special issue presents the results of a number of studies undertaken in the PRIME Network of Excellence and in ERAWATCH to characterise and compare public funding of research activities and, more specifically, project funding, broadly meaning funds allocated to research teams by external agencies to perform research activities limited in time, scope and budget. The relevance of this topic to today's public debate on how to promote research activities at the national and European level is clear, as is the need for systematic comparisons of the different national funding systems and of their evolution over time. There is also a large body of literature in science policy studies dealing with the organisation of public research funding and debating the advantages and disadvantages of different models and organisational structures; it suffices to mention the debate on the need to provide sufficient support to basic, investigator-driven research and to avoid too one-sided an orientation towards immediately useful research.

While building on this stock of knowledge and on the conceptual models and categories developed by a number of authors (for which we refer to the theoretical part of the papers), our work is based on an important methodological innovation: the systematic production of a set of descriptors and indicators that allow funding systems to be compared quantitatively across countries and over time. Since R&D statistics largely disregard the issue of characterising the allocation channels of public money, we devised a specific methodology based on the collection of data from public budgets and funding agencies, and on their subsequent processing into comparative indicators. At the same time, project participants had to provide systematic and standardised descriptions of the funding instruments, because we knew from previous work that the lack of this information in a usable form greatly impaired comparative work. This methodological work took almost two years and is presented in detail in a companion paper published in Research Evaluation (Lepori et al, 2008a); the reader should refer to it for the background of the figures presented here.

It is, however, important to understand the difference (and complementarity) between this work and the development of R&D statistics by official bodies. While official bodies aim to construct a systematic set of data on research expenditures in a long-term and comparative perspective by collecting original data through surveys (Godin, 2005), our aim was more modest: to answer, through the ad hoc processing of existing data, some very specific and localised questions, such as comparing the portfolio of public funding across countries or measuring the evolution over time of its share in total public funding. Hence the indicators proposed in these papers are good examples of what we have called 'positioning indicators', that is, indicators aiming to characterise the position and linkages of the different actors in national innovation systems (Lepori et al, 2008b). A second relevant feature of these indicators is that they are not meant to provide complete descriptions of reality on their own, nor to be used …
