Abstract

The Journal Impact Factor (JIF) is a widely used metric for ranking journals based on the number of citations garnered by papers published over a specific timeframe. To assess the accuracy of JIF values, I compared citation counts for 30 of my own publications across six major bibliographic databases: CrossRef, Web of Science, publisher records, Google Scholar, PubMed and Scopus. The analysis revealed noteworthy variations, with the gap between the lowest and highest citation counts for a given paper ranging from 10% to over 50%. Google Scholar recorded the highest citation numbers, while PubMed reported the lowest. Notably, Web of Science, whose citation data are used in JIF calculations, tends to underestimate citation counts compared with the other databases. These observations raise concerns about the accuracy of JIF values calculated from Web of Science citation data: the real JIF values for most journals would differ from those reported annually in Clarivate's Journal Citation Reports (JCR). These citation discrepancies underscore the importance of comprehensive data collection and the need to include additional citation sources. A citation should not carry more or less weight simply because the citing paper appears in one journal rather than another; ultimately, one citation remains one citation, regardless of its origin. Clarivate Analytics may therefore need to consider integrating all citation sources to produce more accurate JIF values. Alternatively, Google Scholar could develop its own journal or citation impact metric based on its extensive citation records. However, while adjusting how the Journal Impact Factor is calculated can make it more mathematically precise, it does not address the fundamental biases built into the metric. Even with refinements, the Journal Impact Factor will remain skewed because of how it is defined and used.
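For context, the JIF values published in the JCR follow Clarivate's standard two-year definition; the sketch below restates that formula, where Y denotes the JCR year and the symbols C and N are illustrative notation introduced here, not taken from the abstract:

\[
\mathrm{JIF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
\]

Here, \(C_{Y}(y)\) is the number of citations received in year Y (as indexed by Web of Science) by items the journal published in year y, and \(N_{y}\) is the number of citable items the journal published in year y. Because the numerator depends entirely on which citations the indexing database captures, any undercounting by Web of Science propagates directly into a lower reported JIF, which is the concern raised above.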
