Abstract

There are now many methods available to assess the relative citation performance of peer-reviewed journals. Regardless of their individual faults and advantages, citation-based metrics are used by researchers to maximize the citation potential of their articles, and by employers to rank academic track records. The absolute value of any particular index is arguably meaningless unless compared to other journals, and different metrics result in divergent rankings. To provide a simple yet more objective way to rank journals within and among disciplines, we developed a κ-resampled composite journal rank incorporating five popular citation indices: Impact Factor, Immediacy Index, Source-Normalized Impact per Paper, SCImago Journal Rank and Google 5-year h-index; this approach provides an index of relative rank uncertainty. We applied the approach to six sample sets of scientific journals from Ecology (n = 100 journals), Medicine (n = 100), Multidisciplinary (n = 50), Ecology + Multidisciplinary (n = 25), Obstetrics & Gynaecology (n = 25) and Marine Biology & Fisheries (n = 25). We then cross-compared the κ-resampled ranking for the Ecology + Multidisciplinary journal set to the results of a survey of 188 publishing ecologists who were asked to rank the same journals, and found a 0.68–0.84 Spearman’s ρ correlation between the two ranking datasets. Our composite index approach therefore approximates relative journal reputation, at least for that discipline. Agglomerative and divisive clustering and multi-dimensional scaling techniques applied to the Ecology + Multidisciplinary journal set identified specific clusters of similarly ranked journals, with only Nature and Science separating out from the others. When comparing a selection of journals within or among disciplines, we recommend collecting multiple citation-based metrics for a sample of relevant and realistic journals to calculate the composite rankings and their relative uncertainty windows.
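
The composite, resampling-based ranking described above can be sketched in code. The fragment below is not the authors' implementation; it assumes a small, hypothetical journal-by-metric table, converts each metric to a within-subset rank (1 = best), repeatedly draws random subsets of journals, and summarizes each journal's resampled mean rank with percentile bounds as a rough uncertainty window.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical journal-by-metric table (higher values = better on every metric).
metrics = pd.DataFrame(
    {
        "impact_factor":   [12.1, 5.3, 3.2, 1.8, 0.9],
        "immediacy_index": [2.5, 1.1, 0.7, 0.4, 0.2],
        "snip":            [3.0, 1.6, 1.2, 0.8, 0.5],
        "sjr":             [6.2, 2.4, 1.5, 0.9, 0.4],
        "google_h5":       [85, 40, 28, 15, 9],
    },
    index=["J1", "J2", "J3", "J4", "J5"],
)

def composite_resampled_rank(df, n_iter=10000, subset_frac=0.8):
    """Mean within-subset rank per journal, resampled over random journal subsets."""
    k = max(2, int(round(subset_frac * len(df))))
    draws = {j: [] for j in df.index}
    for _ in range(n_iter):
        chosen = rng.choice(df.index.to_numpy(), size=k, replace=False)
        sub = df.loc[chosen]
        # Rank each metric within the subset (1 = best), then average across metrics.
        mean_rank = sub.rank(ascending=False).mean(axis=1)
        for journal, r in mean_rank.items():
            draws[journal].append(r)
    return pd.DataFrame(
        {
            "mean_rank": {j: float(np.mean(v)) for j, v in draws.items()},
            "lower_95":  {j: float(np.percentile(v, 2.5)) for j, v in draws.items()},
            "upper_95":  {j: float(np.percentile(v, 97.5)) for j, v in draws.items()},
        }
    ).sort_values("mean_rank")

print(composite_resampled_rank(metrics))

Averaging ranks rather than raw metric values sidesteps the very different scales of the five indices, and the percentile window widens for journals whose position is sensitive to which peer journals they happen to be compared against.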

Highlights

  • Love them or loathe them, ‘objective’ metrics designed to measure a peer-reviewed journal’s performance relative to others are here to stay

  • Ranks based on the jackknife approach were axiomatically similar (S2 Fig), although the estimated uncertainty was narrower (S3 Fig) given the low number of journal metrics (five) from which to jackknife (illustrated in the sketch after these highlights)

  • The ecology-specific sample of 25 journals (Ecology + Multidisciplinary) yielded another overlapping ranking (Fig 2A) that diverged for some journals from the resampled mean ranks derived from the survey of publishing ecologists (Fig 2B)
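
The leave-one-out comparison mentioned in the second highlight can be sketched in the same spirit as the resampling example after the Abstract: drop each of the five metrics in turn and recompute the composite rank; with only five metrics to omit, the per-journal spread of ranks is necessarily narrow. All names and values below are hypothetical, not taken from the paper.

import pandas as pd

# Hypothetical metric values for three journals (higher = better).
metrics = pd.DataFrame(
    {
        "impact_factor":   [12.1, 5.3, 3.2],
        "immediacy_index": [2.5, 1.1, 0.7],
        "snip":            [3.0, 1.6, 1.2],
        "sjr":             [6.2, 2.4, 1.5],
        "google_h5":       [85, 40, 28],
    },
    index=["J1", "J2", "J3"],
)

# Leave one metric out at a time and recompute the mean within-sample rank.
jack = pd.DataFrame(
    {
        dropped: metrics.drop(columns=dropped).rank(ascending=False).mean(axis=1)
        for dropped in metrics.columns
    }
)

# Per-journal range of mean ranks across the five leave-one-out replicates.
summary = pd.DataFrame({"min_rank": jack.min(axis=1), "max_rank": jack.max(axis=1)})
print(summary)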

Introduction

Love them or loathe them, ‘objective’ metrics designed to measure a peer-reviewed journal’s performance relative to others are here to stay. Critics have shown that the Impact Factor does not compare well among disciplines [9, 10], that it tends to increase over time regardless of journal performance [10, 11], and that the methods behind its calculation are not transparent (e.g., which types of articles are counted). These shortcomings have encouraged gaming, and as a result there have been many suggested modifications to the algorithm [1, 2, 12–14]. Nevertheless, the Impact Factor is entrenched in the psyche of researchers and has arguably changed the dynamic of journal assessment and bibliometrics more than any other single method [15].
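
For reference, the commonly cited two-year definition of the Impact Factor for a journal in year y (standard notation, not taken from this paper) is

\mathrm{IF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}},

where C_y(t) is the number of citations received in year y by items the journal published in year t, and N_t is the number of ‘citable items’ published in year t. Much of the transparency criticism turns on which article types count towards N_t, because citations to content excluded from the denominator can still contribute to the numerator.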

