Abstract

Technology is being developed to support the peer review processes of journals, conferences, funders, universities, and national research evaluations. This literature and software summary discusses the partial or complete automation of several publishing‐related tasks: suggesting appropriate journals for an article, providing quality control for submitted papers, finding suitable reviewers for submitted papers or grant proposals, reviewing, and review evaluation. It also discusses attempts to estimate article quality from peer review text and scores as well as from post‐publication scores, but not from bibliometric data. The literature and existing examples of working technology show that automation is useful for helping to find reviewers, and there is good evidence that it can sometimes help with initial quality control of submitted manuscripts. Much other software supporting publishing and editorial work exists and is being used, but without published academic evaluations of its efficacy. However, the value of artificial intelligence (AI) to support reviewing has not yet been clearly demonstrated. Finally, whilst peer review text and scores can theoretically have value for post‐publication research assessment, they are not yet widely enough available to be a practical evidence source for systematic automation.
