Abstract

Background: Decisions about which applications to fund are generally based on the mean scores of a panel of peer reviewers. As well as the mean, a large disagreement between peer reviewers may also be worth considering, as it may indicate a high-risk application with a high return.

Methods: We examined the peer reviewers' scores for 227 funded applications submitted to the American Institute of Biological Sciences between 1999 and 2006. We examined the mean score and two measures of reviewer disagreement: the standard deviation and range. The outcome variable was the relative citation ratio, which is the number of citations from all publications associated with the application, standardised by field and publication year.

Results: There was a clear increase in relative citations for applications with a better mean score. There was no association between relative citations and either of the two measures of disagreement.

Conclusions: We found no evidence that reviewer disagreement was able to identify applications with a higher-than-average return. However, this is the first study to empirically examine this association, and it would be useful to examine whether reviewer disagreement is associated with research impact in other funding schemes and in larger sample sizes.
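To make the quantities concrete, the Python sketch below computes the per-application mean score, standard deviation, and range, and correlates each with the relative citation ratio. It is an illustration only: the toy data, the column names (application_id, reviewer_score, rcr), and the use of pandas and SciPy are assumptions, not part of the published analysis.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical long-format data: one row per reviewer score per application.
scores = pd.DataFrame({
    "application_id": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "reviewer_score": [3.0, 4.0, 5.0, 2.0, 2.5, 3.0, 4.5, 4.0, 4.5, 1.0, 3.0, 5.0],
})

# Hypothetical outcome: one relative citation ratio (RCR) per application.
outcomes = pd.DataFrame({
    "application_id": [1, 2, 3, 4],
    "rcr": [1.4, 0.8, 2.1, 1.0],
})

# Per-application mean score and the two disagreement measures (SD and range).
summary = scores.groupby("application_id")["reviewer_score"].agg(
    mean_score="mean",
    sd_score="std",
    range_score=lambda s: s.max() - s.min(),
).reset_index()

merged = summary.merge(outcomes, on="application_id")

# Correlate each predictor with the RCR; the RCR is already standardised by
# field and publication year, so a simple correlation is shown here purely
# for illustration.
for col in ["mean_score", "sd_score", "range_score"]:
    r, p = pearsonr(merged[col], merged["rcr"])
    print(f"{col}: r = {r:.2f}, p = {p:.3f}")
```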

Highlights

  • Winning funding is an important stage of the research process, and researchers spend large amounts of their time preparing applications[1]

  • We examined 227 successful grant applications submitted to the American Institute of Biological Sciences between 1999 and 2006

  • There was a strong correlation between the standard deviation and the maximum score (0.80), but not between the standard deviation and the minimum score (0.05)

