Highlights

• We used machine learning and qualitative text analysis to study grant peer review.
• Cluster analysis revealed that the content of review reports matched the predefined evaluation criteria.
• The outcome of grant peer review is more influenced by proposals' weaknesses than by their strengths.
• Features of review reports were consistent across disciplinary evaluation panels.

Abstract

The evaluation of grant proposals is an essential aspect of competitive research funding. Funding bodies and agencies rely in many instances on external peer reviewers for grant assessment. Most of the available research concerns quantitative aspects of this assessment, and there is little evidence from qualitative studies. We used a combination of machine learning and qualitative analysis methods to analyse the reviewers' comments in evaluation reports from 3667 grant applications to the Initial Training Networks (ITN) of the Marie Curie Actions under the Seventh Framework Programme (FP7). Our results show that the reviewers' comments for each evaluation criterion were aligned with the Action's prespecified criteria and that the evaluation outcome was more influenced by the proposals' weaknesses than by their strengths.
