Abstract

Authors and readers often ask about the journal impact factor (JIF). Typically, they want to know our precise JIF. When I discuss the matter with them, it is usually evident that they understand the basic premise that the higher the JIF, the greater the journal's impact on the body of literature. Yet when I ask what they know about the JIF or why they are inquiring about it, it becomes clear that they know very little about the biases and limitations involved in calculating and interpreting the JIF, and for this reason they do not consider the JIF beyond its face value. Having discussed the JIF with other editors, I think it is important for authors and readers, as well as editors and peer reviewers, to have a better understanding of the uses and misuses of the JIF.

The JIF is published yearly in the Journal Citation Reports (JCR), a Thomson Reuters publication that provides a variety of quantitative tools for ranking and comparing journals. The JIF is based on an estimate of the frequency with which a journal's representative scholarly articles are cited during a specified period of time (1). Specifically, the JIF is calculated from the number of citations made to the scholarly articles that a journal has published in a given time period, usually 2 years. Thus, the 2011 2-year JIF for a given journal was calculated as follows (a brief worked example appears below):

2011 JIF = (citations in 2011 to items published in 2010 + citations in 2011 to items published in 2009) / (number of scholarly citable items published in 2009 and 2010)

According to Eugene Garfield, one of the creators of the impact factor, the 2-year interval was chosen because it was thought to provide a current measure of a journal's influence; a 1-year measure would give greater weight to rapidly changing fields, whereas longer periods would measure a longer-lasting influence (2).

Purportedly, the JIF measures a journal's influence in the scientific community by looking at how many of its articles are cited by authors whose articles appear in a specific subset of other scientific journals, called citing journals, as defined by the JCR. The JCR defines the journals it lists as either citing journals or cited-only journals. Interestingly, self-citations from a cited-only journal are not included in that journal's impact factor calculation. This can make a significant difference in the impact factor, because self-citations represent about 13% of the citations that a journal receives (1). Because JFAS is a cited-only journal, its impact factor calculation does not take self-citations into account.

The JIF calculation arose from the need to select which journals to include in the Science Citation Index (2). Eugene Garfield and Irving Sher, founders of the Institute for Scientific Information (ISI), first published the Science Citation Index in the 1960s, and the Journal Citation Reports, which include the JIF, followed in 1975. ISI was later acquired by Thomson Reuters, which currently publishes the JCR annually.
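To make the arithmetic concrete, the following is a minimal sketch of the 2-year calculation in Python. The function and the citation counts are hypothetical and purely illustrative of the fraction shown above; they do not reflect Thomson Reuters' actual tooling or any real journal's figures.

```python
def two_year_jif(cites_to_year_minus_1, cites_to_year_minus_2, citable_items):
    """2-year JIF: citations in the JCR year to items published in the two
    preceding years, divided by the number of scholarly citable items
    published in those two years."""
    return (cites_to_year_minus_1 + cites_to_year_minus_2) / citable_items

# Hypothetical 2011 example: 60 citations in 2011 to items published in 2010,
# 70 citations in 2011 to items published in 2009, and 250 citable items
# published across 2009 and 2010.
print(round(two_year_jif(60, 70, 250), 3))  # 0.52
```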
JCR is part of the Web of Science, Thomson Reuters' metadatabase of journal article citations comprising the Science Citation Index, Social Sciences Citation Index, Arts & Humanities Citation Index, Conference Proceedings Citation Index, Index Chemicus, and Current Chemical Reactions. Both Web of Science and JCR are based on the same database of journal citations and cited references. Most of these indexes track citations to specific articles, a function served by many citation databases, including Google Scholar, ProQuest, Scopus, and PubMed, to name a few. In contrast, JCR tracks citations at the journal level, a service that is less common but not unique; other resources, most notably Eigenfactor.org®, also track citations at the journal level.

Current uses of the JIF are many and varied. Reference librarians have long used it to manage journal collections. Authors use it to decide which journal to submit their articles to, journal editors and managers use it to gauge their journal's standing in the scientific community, and publishers and other commercial enterprises use it as a market research tool. Other, more dubious uses have been the subject of debate, particularly the JIF's use as a measure of the worth of an individual scientist's contributions for purposes of tenure and promotion. Most unfortunately, the JIF has come to represent the scholarly value of any work published in a given journal (3).

It is important to keep in mind the factors that affect the JIF and to be aware of its limitations when considering its numeric value. Because the JIF is based on a simple calculation, the factors influencing it are best understood by considering the element of the calculation they affect: the numerator or the denominator.

The numerator is the number of citations in a given year to any scholarly items published in the journal in the previous 2 years. How those items are identified is up to the JCR editorial staff. According to Stephen Hubbard, Senior Editor of JCR, “the JCR is optimized to be as inclusive as possible in collecting citations to the journal as a whole, independently of whether the cited reference is linked to a specific source item in Web of Science. JCR will aggregate all citations to any recognizable variants of a journal's title, and will distribute the citations according to the year of content referenced” (4). There are important restrictions regarding which citations are counted, however. According to the Thomson Reuters website (1), “While Thomson Reuters does manually code each published source item, it is not feasible to code individually the 12 million references we process each year. Therefore, journal citation counts in JCR do not distinguish between letters, reviews, or original research. So, if a journal publishes a large number of letters, there will usually be a temporary increase in references to those letters.
Letters to the Lancet may indeed be cited more often than letters to JAMA or vice versa, but the overall citation count recorded would not take this artifact into account. Detailed computerized article-by-article analyses or audits can be conducted to identify such artifacts.”

The denominator is the total number of scholarly articles published by a journal in a single year that are most likely to be cited (3). The articles included in the count are those identified by the Web of Science database as “Article,” “Review,” or “Proceedings Paper” (3). As a rule, the larger the denominator, the smaller the impact factor: a journal that publishes a high number of scholarly articles per year will have a lower impact factor than an otherwise comparable journal that publishes fewer such articles in a year. This may be partly offset by a greater number of citations to the journal when more articles are published, depending on the types of articles involved (a simple numeric illustration appears at the end of this discussion). Journal editors can also manipulate the JIF by encouraging or coercing authors to omit citations to reports published in competing journals or, if the journal is a citing journal, to cite articles published in the editor's own journal (self-citation). Another factor that could influence the JIF is the time it takes a manuscript to move through the peer review and revision processes: delays in publication can lead to the omission of some citations because the article being cited is no longer current (2). This adverse effect of publication delays can be lessened by a longer observation period, such as the 5-year JIF, which reflects a journal's longer-lasting influence on the literature.

Finally, the JIF of specialty journals such as JFAS may be affected because such journals tend to publish articles that are confirmatory or hypothesis generating in nature. This type of article is essential to the scientific process but tends not to be cited as often (5). Case studies and tips/techniques reports are examples of the types of articles that do not attract many citations. When a journal is trying to raise its impact factor, case studies, the Rodney Dangerfields of scientific research, often go to the chopping block. However, there is much to be said for the essential place of case reports in the scientific process, and I have written previously about the hypothesis-generating function of case reports (6).
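As a rough numeric illustration of the denominator and self-citation effects described above (the counts here are invented and purely illustrative, not drawn from any actual journal), consider the following sketch:

```python
# Two hypothetical journals receive the same number of citations in the JCR year
# to items from the prior 2 years but publish different numbers of citable items.
citations = 130
journal_a_items = 250   # fewer citable items -> larger JIF
journal_b_items = 500   # more citable items  -> smaller JIF

print(citations / journal_a_items)   # 0.52
print(citations / journal_b_items)   # 0.26

# For a cited-only journal, self-citations are excluded from the numerator.
# Assuming, purely for illustration, that 13% of journal A's 130 citations were
# self-citations, the counted numerator (and thus the JIF) shrinks accordingly.
print(round(citations * (1 - 0.13) / journal_a_items, 3))  # 0.452
```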
The JIF has been widely criticized as a general metric for a journal's place in the pantheon of scientific publications. Two commonly cited limitations are that the JIF does not account for differences in citation patterns between disciplines and that it does not distinguish between citations from prestigious journals and citations from lesser-known journals. According to Bergstrom and West (7), “…in the impact factor calculation, a citation from Nature is worth no more than a citation from a second-tier review journal, and a citation in the field of mathematics (where bibliographies are short and recent citations are scarce) is worth no more than a citation in the field of immunology (where bibliographies are long and recent citations are common).”

The Eigenfactor™ Score (ES) and the Article Influence Score® (AIS) rank journals in a fashion similar to Google's website ranking algorithm, which takes into account not only the number of links to a site but also the quality of the linking sites. Google's exact algorithm is a trade secret; however, it is generally known as PageRank, an objective measure of citation importance that corresponds to readers' subjective ideas of importance (8). (A toy sketch of this eigenvector-style ranking idea appears at the end of this discussion.) The ES ranking algorithm adjusts for differences in citation styles between disciplines: “For example, the average article in a leading cell biology journal might receive 1–30 citations within two years; the average article in a leading mathematics journal would do very well to receive 2 citations over the same period” (9). In addition, the ES reflects the types of journals that are citing articles, and it seems to limit the influence of the denominator in the impact factor equation. According to Rizkallah and Sin (10), “In general, journals that publish a lot of papers have higher ES values than would be expected for their JIF. Conversely, journals that publish a small volume of papers have lower ES values than expected for their JIF.”
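To give a feel for how such eigenvector-style ranking differs from simply counting citations, here is a minimal sketch in Python. It is not the actual Eigenfactor™ or Google PageRank computation (both involve additional normalizations and implementation details), and the journal names and cross-citation counts are invented for illustration only.

```python
import numpy as np

# Invented cross-citation counts: cites[i, j] = citations from journal i to journal j
# (self-citations are left out, as Eigenfactor-style metrics typically exclude them).
journals = ["Journal A", "Journal B", "Journal C"]
cites = np.array([[0, 8, 2],
                  [5, 0, 1],
                  [4, 3, 0]], dtype=float)

# Normalize each citing journal's outgoing citations to sum to 1, then transpose,
# so a citation counts in proportion to how selectively the citing journal cites.
M = (cites / cites.sum(axis=1, keepdims=True)).T

# Power iteration with damping (PageRank-style): journals cited by influential,
# selective journals accumulate higher scores than raw citation counts alone suggest.
scores = np.full(len(journals), 1.0 / len(journals))
for _ in range(100):
    scores = 0.85 * M @ scores + 0.15 / len(journals)

for name, score in zip(journals, scores / scores.sum()):
    print(f"{name}: {score:.3f}")
```

In this toy example, a journal's score depends not only on how many citations it receives but also on which journals those citations come from, which is the intuition behind the ES and AIS.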
The 2011 JIF and ES for JFAS are shown in the Table. It is easy to see the large difference between the total number of citations and the number of citations in citing journals, a shortcoming of the JIF that authors and readers should consider when they think about the meaning of the JIF. Clearly, the influence of the articles published in JFAS extends far beyond the limits suggested by the JIF. The ES attempts to quantify the types of journals citing the articles published in JFAS and, as previously noted, limits the influence of the denominator in the calculation. These metrics are most helpful when they are used to compare journals that focus on similar content, and readers are encouraged to go to the JCR for this purpose.

Table. Selected metrics for The Journal of Foot & Ankle Surgery®*
- Citable articles published in JFAS in 2009 and 2010: 254
- Total number of citations in 2011 to articles published in JFAS in 2009 and 2010†: 1161
- Citations in 2011 to articles published in JFAS in 2009 and 2010‡: 131
- 2-year journal impact factor: 0.516
- Eigenfactor™ Score: 0.00241

*Data from Thomson Reuters. 2011 Journal Citation Reports. New York: Thomson Reuters; 2011.
†Includes citations in citing and cited-only journals.
‡Includes only citations in citing journals.

As we can see, there are several metrics by which a journal's impact on the medical literature, as well as its influence on its readers, can be indirectly measured. Because society journals direct their content to meet the needs and desires of their readership (variables that are typically ascertained by means of readership surveys), the type and number of articles published vary from journal to journal, and a broad metric such as the JIF fails to take into account the needs of a specific readership and the influence that a particular journal has on its target audience. I advise readers and authors to keep these limitations in mind when they consider the JIF, because meaningful information can be found even in uncited articles.

References
1. Thomson Reuters Science. Introducing the impact factor. Available at: http://thomsonreuters.com/products_services/science/academic/impact_factor/. Accessed March 19, 2013.
2. Garfield E. Journal impact factor: a brief review. Can Med Assoc J 1999;161:979-980.
3. McVeigh M, Mann S. The journal impact factor denominator: defining citable (counted) items. JAMA 2009;302:1107-1109.
4. Hubbard S, McVeigh M. Casting a wide net: the journal impact factor numerator. Learned Publishing 2011;24:133-137.
5. Kanaan Z, Galandiuk S, Abby M, Shannon K, Dajani D, Hicks N, Rai S. The value of lesser-impact-factor surgical journals as a source of negative and inconclusive outcomes reporting. Ann Surg 2011;253:619-623.
6. Malay DS. The value of an interesting case. J Foot Ankle Surg 2007;46:211-212.
7. Bergstrom C, West J. Assessing citations with the Eigenfactor™ metrics. Neurology 2008;71:1850-1851.
8. Brin S, Page L. The anatomy of a large-scale hypertextual web search engine. Available at: http://infolab.stanford.edu/~backrub/google.html. Accessed March 24, 2013.
9. Eigenfactor.org®. Overview of ranking and mapping scientific knowledge. Available at: http://www.eigenfactor.org/methods.php. Accessed March 24, 2013.
10. Rizkallah J, Sin D. Integrative approach to quality assessment of medical journals using impact factor, Eigenfactor, and article influence scores. PLoS One 2010;5:e10204. https://doi.org/10.1371/journal.pone.0010204.
