Abstract

Selecting among alternative projects is a core management task in all innovating organizations. In this paper, we focus on the evaluation of frontier scientific research projects. We argue that the intellectual distance between the knowledge embodied in research proposals and an evaluator's own expertise systematically relates to the evaluations given. To estimate relationships, we designed and executed a grant proposal process at a leading research university in which we randomized the assignment of evaluators and proposals to generate 2,130 evaluator-proposal pairs. We find that evaluators systematically give lower scores to research proposals that are closer to their own areas of expertise and to those that are highly novel. The patterns are consistent with biases associated with boundedly rational evaluation of new ideas. The patterns are inconsistent with intellectual distance simply contributing “noise” or being associated with private interests of evaluators. We discuss implications for policy, managerial intervention and allocation of resources in the ongoing accumulation of scientific knowledge.
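As a rough sketch of the randomized assignment the abstract describes, evaluator-proposal pairs can be generated by drawing a random set of evaluators for each proposal. The counts, the identifiers, and the reviews-per-proposal parameter below are illustrative assumptions, not the study's actual design.

```python
import random

def assign_evaluators(proposals, evaluators, reviews_per_proposal, seed=0):
    """Randomly match evaluators to proposals, returning (evaluator, proposal) pairs."""
    rng = random.Random(seed)
    pairs = []
    for proposal in proposals:
        # Draw a random subset of evaluators for this proposal.
        chosen = rng.sample(evaluators, reviews_per_proposal)
        pairs.extend((evaluator, proposal) for evaluator in chosen)
    return pairs

# Illustrative sizes only (not the paper's): 150 proposals, 140 evaluators, 15 reviews each.
proposals = [f"proposal_{i}" for i in range(150)]
evaluators = [f"evaluator_{j}" for j in range(140)]
pairs = assign_evaluators(proposals, evaluators, reviews_per_proposal=15)
print(len(pairs))  # 2,250 evaluator-proposal pairs in this made-up configuration
```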

Highlights

  • A fundamental challenge that all organizations engaged in scientific and technological innovation face is how to allocate resources across alternative project proposals (e.g., Astebro and Elhedhli 2006, Hallen 2008)

  • We found that evaluators gave systematically lower scores to research proposals that were closer to their own areas of expertise

  • Intellectual Distance and Evaluation Scores: the evaluation of proposal i by evaluator j can be shaped by proposal covariates (X_i), evaluator covariates (X_j), and luck or noise, which we describe with a zero-mean error term (see the sketch after this list)
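One minimal way to write this down is a linear specification in which the score of proposal i from evaluator j depends on the intellectual distance between them, the two sets of covariates, and a zero-mean error term. The linear form, the Distance term, and the coefficient names are assumptions for illustration, not necessarily the paper's exact model.

```latex
% Sketch only: linear form and coefficient names are illustrative assumptions.
\[
  \text{Score}_{ij}
    = \beta\,\text{Distance}_{ij}
    + \gamma' X_i
    + \delta' X_j
    + \varepsilon_{ij},
  \qquad
  \mathbb{E}[\varepsilon_{ij}] = 0 .
\]
```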


Summary

Introduction

A fundamental challenge that all organizations engaged in scientific and technological innovation face is how to allocate resources across alternative project proposals (e.g., Astebro and Elhedhli 2006, Hallen 2008). The evaluation process might be understood as a matter of evaluators each discerning a noisy signal of true quality, following a classical perspective of statistical decision making under uncertainty. In this case, greater intellectual distance (less expertise, greater ignorance) would lead to less precise evaluations but no differences in mean evaluations. Our empirical task is to precisely observe variation in intellectual distance and relate it to evaluation outcomes, independent of confounding factors, including the true quality of research proposals. In relation to intellectual distance, the pattern of lower scores given by the most expert evaluators is consistent with experts more readily “seeing” and “sampling” more informational cues than less expert evaluators do, with experts observing a disproportionately greater number of demerits, problems, and limitations of research proposals.
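To make the contrast concrete, the following simulation uses made-up parameters (not the study's data) to show why noise alone cannot produce the observed pattern: widening the noise spreads scores around true quality without moving the mean, whereas a systematic discount applied by intellectually close evaluators lowers the mean itself.

```python
import random
import statistics

# Hedged illustration, not the paper's analysis: compare a "noise only" view of
# intellectual distance with a view in which close (expert) evaluators apply a
# systematic penalty. All quality, noise, and penalty values are invented.
rng = random.Random(42)
true_quality = 5.0
n = 10_000

def scores(noise_sd, bias=0.0):
    """Simulate n evaluation scores around true quality with given noise and bias."""
    return [true_quality + bias + rng.gauss(0, noise_sd) for _ in range(n)]

near = scores(noise_sd=0.5)                    # close evaluator, noise only
far = scores(noise_sd=1.5)                     # distant evaluator, noise only
near_biased = scores(noise_sd=0.5, bias=-0.8)  # close evaluator with a penalty

for label, s in [("near, noise only", near),
                 ("far, noise only", far),
                 ("near, with penalty", near_biased)]:
    print(f"{label:20s} mean={statistics.mean(s):.2f} sd={statistics.stdev(s):.2f}")
# Noise only: means match, spreads differ. Penalty: the mean itself drops.
```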

Advancing Scientific Knowledge and Evaluations
Intellectual Distance and Novelty in the Advance of Scientific Knowledge
Bounded Rationality
A Call for Research Proposals from the “First Phase” of a Grant Process
Evaluator Assignment and the Evaluation Process
Main Results
Novel Departures from Existing Research and Evaluation Scores
Evaluation of Alternative Explanations
Bounded Rationality and Expert Cognition Perspectives
Summary and Conclusions
Implications for the Evaluation of Frontier Projects