Abstract
The impact factor for 2012 has just been released. The impact factor of Clinical Microbiology and Infection increased slightly this year, from 4.540 to 4.578 (Table 1). It is one marker among others, but it is more pleasant to see it move in a positive rather than a negative direction. It is calculated as the number of citations received in 2012 by articles published in 2010 and 2011, divided by the number of articles published in those two years (see the sketch below). Because of the backlog that had accumulated in publication times, CMI issues were substantial, with 421 papers published in 2009, 328 in 2010, 366 in 2011 and 385 in 2012 [1,2]. Interestingly, the impact factor has just been the subject of a petition from researchers who refuse to have it used in the evaluation of their careers [3]. This is interesting, but it is difficult to take a definite position on the matter. The impact factor is a witness to the quality of a journal at a given moment in a given specialty, and it also reflects the number of scientists working in that field. For example, in infectious diseases a paper on AIDS, a field in which more than 7000 articles are published yearly, has a greater chance of being highly cited than one on Tropheryma whipplei, the agent of Whipple’s disease, on which only 412 articles have been published in 20 years (Table 2). This variation by field and domain should be taken into consideration when evaluating researchers’ curricula vitae (CVs). However, having articles published in successful journals demonstrates that they have withstood a competition whose intensity is directly related to the impact factor. In fact, most authors now seek to send their articles to the journals with the highest possible impact factor, so the competition becomes harder in high-impact-factor journals; in Clinical Microbiology and Infection, the growth of our impact factor means that we now reject 82% of original articles. It is indeed undoubtedly a marker of competitiveness. It is not the only marker for researchers of a certain age, for whom elements such as the total number of citations and the H-index may also play a role in evaluation. In any case, we cannot underestimate the importance of these objective markers, which prevent, as I have often had the opportunity to observe, pure subjectivity (often disguising friendship, enmity or the intent to avoid competition) in the evaluation of researchers. Finally, the impact factor stimulates competition between journals and encourages editors to pursue policies of quality. Authors have many choices, as the number of journals is increasing. In contrast, there is little or no such competition for grants or promotions, where the subjectivity of a single evaluator can destroy a career if it is not tempered by the objectivity of the ranking of the journals in which the authors have published. The impact factor will remain a more objective marker than the single round of peer review used for grants or promotions. Therefore, the multiple opportunities for review offered by journals make evaluation based on the ranking of published papers far fairer than evaluation based on grant proposals, as recently shown.

TABLE 1. Ranking by impact factor
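To make the arithmetic concrete, here is a minimal sketch of the two-year impact-factor calculation described above. The helper function is hypothetical; the 2010 and 2011 article counts (328 and 366) are those quoted in the text, and the citation total shown is only an illustrative figure inferred from the reported 2012 impact factor of 4.578.

```python
# Minimal sketch of the two-year impact-factor calculation (hypothetical helper).
def impact_factor(citations_this_year: int, articles_prev_two_years: int) -> float:
    """Citations received this year by articles published in the two
    preceding years, divided by the number of those articles."""
    return citations_this_year / articles_prev_two_years

# Illustration only: with 328 + 366 = 694 articles published in 2010-2011,
# an impact factor of 4.578 corresponds to roughly 3177 citations received
# in 2012 (694 x 4.578 is about 3177; the citation total is inferred, not reported).
print(round(impact_factor(3177, 328 + 366), 3))  # -> 4.578
```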