Abstract

This study examined F1000Prime-recommended research and review articles published in Cell, JAMA: The Journal of the American Medical Association, The Lancet, and The New England Journal of Medicine (NEJM) in 2010. The analyses included (1) the classifications assigned to the articles; (2) differences in Web of Science (WoS) citation counts over 9 years between the articles with F1000Prime recommendations and the other articles of the same journal; (3) correlations between the F1000Prime rating scores and WoS citation counts; (4) scaled graphic comparisons of the two measures; and (5) content analysis of the top 5 WoS-cited and top 5 F1000Prime-scored NEJM articles. The results show that most of the recommended articles were classified as New Finding, Clinical Trial, Confirmation, Interesting Hypothesis, and Technical Advance. The top classifications differed between the medical journals (JAMA, The Lancet, and NEJM) and the biology journal (Cell); in the latter, both New Finding and Interesting Hypothesis occurred more frequently than in the three medical journals. For the three medical journals, the articles recommended by F1000 Faculty members were cited significantly more than other articles of the same journal, but no significant difference was found between the two sets of articles in Cell. The correlations between the F1000Prime rating scores and WoS citation counts of articles in the same journal were significant for two of the medical journals (The Lancet and NEJM) and for the biology journal (Cell). NEJM also showed significant correlations in both the upper quantile (top 50%) and the upper quartile (top 25%) subsets. The remaining medical journal, JAMA, did not show any significant correlation between the two measures. Despite the significant correlations for the three journals, Min–Max scaled graphic comparisons of the two measures did not reveal any patterns for predicting citation trends from F1000Prime rating scores. For NEJM, the peak citation year of the articles ranged from 2 to 8 years after the publication year. Content analysis of the top-cited and top-scored NEJM articles found that highly commended papers, with comments such as “exceptional,” “landmark study,” or “paradigm shift,” received varied rating scores. Some of these results corroborate previous studies. Further studies are suggested that include additional journals and different publication years as well as alternative methods. Studies are needed to understand how F1000 Faculty assign ratings and what criteria they use. It is also worth investigating how F1000Prime users perceive the meanings of the ratings.
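As a hedged illustration of the comparison described above (a sketch added here, not the authors' actual analysis code), the snippet below applies Min–Max scaling to put rating scores and citation counts on a common 0–1 range and computes a rank correlation on the full set as well as on upper-quantile (top 50%) and upper-quartile (top 25%) subsets. The column names, the use of Spearman's coefficient, and the rule of subsetting by citation count are all assumptions made for illustration; the abstract does not specify them.

```python
# Hedged sketch of the two measures described in the abstract:
# Min-Max scaling for graphic comparison, rank correlation for association.
import pandas as pd
from scipy.stats import spearmanr

def min_max_scale(x: pd.Series) -> pd.Series:
    """Rescale a column to [0, 1] so both measures can be plotted on one axis."""
    return (x - x.min()) / (x.max() - x.min())

def correlate(df: pd.DataFrame) -> tuple[float, float]:
    """Rank correlation between F1000Prime scores and WoS citation counts.
    Spearman is assumed here because citation counts are typically skewed;
    the abstract does not name the coefficient used."""
    rho, p = spearmanr(df["f1000_score"], df["wos_citations"])
    return rho, p

# Hypothetical toy data; real values would come from F1000Prime and WoS exports.
df = pd.DataFrame({
    "f1000_score":   [1, 2, 3, 6, 8, 10, 2, 4],
    "wos_citations": [40, 95, 120, 310, 560, 900, 75, 210],
})

subsets = {
    "all":     df,
    "top 50%": df[df["wos_citations"] >= df["wos_citations"].quantile(0.50)],
    "top 25%": df[df["wos_citations"] >= df["wos_citations"].quantile(0.75)],
}
for label, subset in subsets.items():
    rho, p = correlate(subset)
    print(f"{label:8s} rho={rho:.2f} p={p:.3f}")

scaled = df.apply(min_max_scale)  # both columns on a common 0-1 scale for plotting
```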

Highlights


  • Recommendations are submitted by F1000Prime Faculty members, who are nominated by peers as experts in their fields

  • The rationale for the journal selection is provided in the Introduction


Introduction

A challenge facing scientists in the Big Science era is the information explosion, a phenomenon predicted more than 50 years ago (de Solla Price 1961). Bornmann and Mutz (2015) report that global publication output grows at a rate of approximately 3% per year. Researchers at all levels feel the need for filtering tools: senior researchers no longer have the time to read every journal in their field, and junior researchers lack the knowledge or expertise required to judge the quality of publications (Pontis et al. 2015). To satisfy this need, F1000 introduced a new service providing post-publication peer recommendations. In 2009, F1000 merged its two earlier services, F1000 Biology (inaugurated in 2002) and F1000 Medicine (launched in 2006), into a single database, F1000Prime. F1000 Faculty members can also submit a comment as Dissent without a rating score.
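As a rough back-of-the-envelope check (an illustration added here, not a result from the cited work), a steady 3% annual growth rate implies that global publication output doubles roughly every 23 years:

```python
# Hedged arithmetic sketch: doubling time implied by ~3% yearly growth
# (the approximate rate reported by Bornmann and Mutz 2015).
import math

annual_growth = 0.03
doubling_time = math.log(2) / math.log(1 + annual_growth)
print(f"Implied doubling time: {doubling_time:.1f} years")  # about 23.4 years
```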
