Abstract

Currently, the scientific community uses several bibliometric indices to gauge the impact of a scientific publication and of the journal in which it was published. One of these parameters is the science citation index, a valid way to assist librarians in managing bibliographic control and costs effectively. The citation index quantifies the number of citations a particular publication receives. In turn, this information is used to calculate a journal-specific parameter, the journal impact factor [1, 2]. The impact factor is defined as the average number of citations received per paper published in a specific journal during the preceding 2 years. Both parameters have since evolved away from their original intention: they are now used as quantifiable measures of quality, of the scientist and of the journal in which the scientist publishes. A third parameter, the so-called H-index, is an alternative to the citation index. The H (or Hirsch) index attempts to capture both the productivity and the impact of a scientist's published work: a scientist has an H-index of h when h of his or her papers have each been cited at least h times (a minimal computational sketch is given below). For many individuals (and institutions), the H-index has turned into the 'hype' index [3].

The 2011 impact factor of the Netherlands Heart Journal is 1.438. It was calculated as follows: in 2011 there were 107 citations to articles published in 2009 and 100 citations to articles published in 2010, a total of 207 citations. The journal published 68 articles in 2009 and 76 in 2010, a total of 144 articles. The 2011 impact factor is therefore 207 citations divided by 144 articles: 207/144 = 1.438. The NHJ impact factor has thus remained rather stable in recent years: 1.392 in 2009 and 1.447 in 2010.
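In formula form (using ad hoc symbols that are not part of the journal's own notation: C_y for citations received in 2011 to articles published in year y, and N_y for the number of citable articles published in year y), the calculation above reads:

\[
\mathrm{IF}_{2011} = \frac{C_{2009} + C_{2010}}{N_{2009} + N_{2010}} = \frac{107 + 100}{68 + 76} = \frac{207}{144} \approx 1.438
\]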
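To make the H-index definition above concrete, here is a minimal Python sketch, not any official implementation; the citation counts in the example are purely hypothetical:

def h_index(citation_counts):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citation_counts, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # the paper at this rank still has at least 'rank' citations
        else:
            break      # all remaining papers are cited too rarely to raise h
    return h

# Hypothetical example: five papers cited 10, 8, 5, 4 and 3 times
print(h_index([10, 8, 5, 4, 3]))  # prints 4

The example list yields an H-index of 4 because four of the papers have at least four citations each, while the fifth has only three.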
How important are impact factors? Over time, they have become the holy grail of the scientific journal domain. Many authors want to publish in journals with the highest impact factors, because doing so enhances their scientific image, their professional profile and their academic career prospects. In some institutions or departments, publication in journals with an impact factor below five is even regarded as a sign of 'mediocre scientific quality'. As a result, every journal editor works hard to improve his or her journal's impact factor, because publishers view it as an index of journal quality and success, determining the extent to which the journal is resourced by its sponsoring organisation or publisher. However, many confounders may influence the impact factor, which at the very least challenges its scientific significance [4].

There are several ways to artificially improve the impact factor of a journal. From the perspective of a chief editor, since the impact factor is the ratio of citations to articles, one can either increase the number of citations or decrease the overall number of published articles. There are many examples of this manipulative strategy. For instance, editors may stimulate (or even 'force') authors to cite papers previously published in the journal in which their work is to appear. Deliberate publication of large industry-supported trials may generate many citations and thereby inflate the impact factor; it also increases the journal's income through reprint sales and may thus become a source of conflict of interest for journals [5]. Furthermore, the number of citable items (i.e. papers that count in the denominator) can be purposely limited to original and review articles, which is another way of increasing the impact factor. An approach that has become en vogue in many journals is the 'best of' article: a review referring to, usually, over 100 papers (read: citations!) published in the same journal in the previous year. These manipulative approaches are increasingly used by editors (too) eager to achieve and sustain a high impact factor. In my view, such editorial policies balance on the razor's edge of scientific integrity [6–9].

As an alternative to the impact factor, Darmoni et al. [10] proposed a 'reading index' in 2002, defined as the ratio of e-page views of articles within a specific journal to overall e-page views. Applying this ratio to 46 biomedical journals with widely varying impact factors in 1997, the authors found no correlation between a journal's impact factor and its reading index. This illustrates the inadequacy of the impact factor even for the purpose for which it was originally intended, namely as a measure of a journal's use and reader appeal [11]. However, at present no valid alternative to the impact factor has gained sufficient ground. Since the impact factor is becoming ever more widely institutionalised, both academically and commercially, it is unlikely that a substitute will emerge in the near future. Until then, we are at the mercy of this 'poor man's best' parameter as a stand-in for a true journal quality index [12].
