Abstract

The increasing use of bibliometric measures is evident in all fields of science. A bibliometric indicator widely applied as a measure of scientific performance is the impact, i.e., the number of citations a particular paper receives for a period of years after its publication (Peters and Van Raan 1994). The use of impact factors was first proposed by Garfield (1972); values of the impact factors (IF) of scientific journals appear in the annual Journal Citation Reports (JCR) of the Institute for Scientific Information (ISI), and this measure is the most widely used approach to journal assessment. In this context, ISI (1993: 6) suggests that users can compare a journal's IF to the baseline for all JCR-indexed journals, i.e., a measure based on a mixture of journals from very different disciplines. In contrast, the chairman emeritus of ISI advocates cautious consideration of the variation of IF among disciplines (Garfield 1994). However, even within a discipline, both mechanical factors (the yearly rate of increase of items that could be cited, the relative yearly increment of citing journals, and the relative yearly increment of citations) and non-mechanical factors (how long the journal has been published, number of subscriptions, level of specialization, editorial policy, geographic location) can influence the number of citations that a journal receives (Ferreiro and Ugena 1992).

In addition to the impact assessment of scientific journals, IF are used for a variety of other purposes: many journals use them in their advertisements; they serve in the market research of publishers and others (Garfield 1994); they are used for evaluating the level of scholarship of a country's contributions to a field (e.g., agriculture in India; Garg and Dutt 1992); they are considered in decisions on financial support for the publication of journals (Lilamand 1994); they are used in deciding whether to cancel journal subscriptions in times of budget constraints (Duerenberg 1993); they are included in the evaluation of candidates' bibliographies for promotion and professorships, comparable to the manner in which citation counts are used (Garfield 1979; Braxton and Bayer 1986); and, in several European countries, they are even used in the evaluation of research groups' contributions. Peters (1991) went beyond the cautions of the many information scientists who have indicated that IF should not be used to compare different fields (Ribbe 1991; Ferreiro and Ugena 1992): he used low IF values as evidence of the crisis state of research in one subject area (ecology) as compared to the high IF indicative of the current power of other fields (biochemistry and molecular biology). Given that literature-based indices have been used in establishing university science policy (Moed et al. 1985) and have been proposed to identify emerging fields of science (Leydersdorff et al. 1994), the use of IF as a consideration in funding research (Wade 1975) is not unlikely to occur in the future.

In contrast to these multiple uses of IF, Hargens and Schuman (1990) blame the widespread use of publication counts for the deluge of trivial publications. Furthermore, Tainer (1991) suggests that the widely reported results of citation-frequency evaluations (Hamilton 1990) may do more harm than good by encouraging the trendiest science rather than the best, by leading the public to assume that most scientific research and publication is a waste of money, and even by prompting (U.S.) lawmakers to reduce science funding.
Finally, McCain (1994) concludes that citation analyses, however sophisticated, cannot give
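For concreteness, the JCR impact factor around which these uses and criticisms revolve is a simple ratio: citations received in a given year to a journal's items from the previous two years, divided by the number of citable items the journal published in those two years. The following Python sketch illustrates this standard two-year calculation; the function name and example figures are illustrative assumptions, not taken from any of the works cited above.

    # Minimal sketch of the standard two-year (JCR-style) impact factor.
    # Names and numbers are illustrative, not from any cited source.

    def impact_factor(citations_to_prev_two_years: int,
                      citable_items_prev_two_years: int) -> float:
        """IF of a journal in year Y: citations received in Y to items
        published in Y-1 and Y-2, divided by the number of citable
        items published in Y-1 and Y-2."""
        if citable_items_prev_two_years == 0:
            raise ValueError("no citable items published in the window")
        return citations_to_prev_two_years / citable_items_prev_two_years

    # Hypothetical example: 420 citations in 1994 to articles from
    # 1992-93, against 150 citable items published in 1992-93.
    print(impact_factor(420, 150))  # 2.8

Note that both the numerator and the denominator are discipline-dependent, which is one reason the cross-field comparisons criticized above are problematic: fields differ in citation density, in the size of their citing literature, and in what counts as a citable item.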
