Abstract

Journal editors and publishers, authors of scientific papers, research directors, university and research council administrators, and even government officials increasingly make use of so-called ‘Impact Factors’ to evaluate the quality of journals, authors and research groups. These figures are used in decision-making processes about the (dis)continuation of journal subscriptions, the selection of journals for submission of papers, the ranking of authors and groups of authors, and even the increase or decrease of funding to research groups. All data are based on counting citations of the scientific papers of authors. Very few users appear to realize that these figures can be seriously wrong, biased and even manipulated, as a result of: (i) the citation habits of authors in different fields, (ii) selectivity in (non-)citation by authors, (iii) errors made by authors in the citation lists at the end of papers, (iv) errors made by ISI in entering publications and citations into databases, and in classifying citations and accrediting them to journals and authors, and (v) incomplete and misleading impact figures published by ISI. Although quite a few bona fide and competent analysts and organisations specialized in citation analysis exist, the incompetence of many analysts who use crude ISI data to discuss rankings of journals and/or authors is an additional factor that often makes such analyses unreliable.
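The abstract refers to ‘Impact Factors’ without defining them. For context, the standard ISI two-year journal impact factor for year Y is the number of citations received in Y to items the journal published in the two preceding years, divided by the number of citable items it published in those years. A minimal sketch of that calculation, using made-up numbers for a hypothetical journal:

```python
def impact_factor(citations_in_year: int, citable_items: int) -> float:
    """Two-year impact factor for year Y.

    citations_in_year: citations received in year Y to items the journal
                       published in years Y-1 and Y-2.
    citable_items:     number of citable items the journal published in
                       years Y-1 and Y-2.
    """
    return citations_in_year / citable_items

# Hypothetical journal: 420 citations in year Y to papers from the two
# preceding years, which together contained 150 citable items.
print(round(impact_factor(420, 150), 2))  # 2.8
```

Note that both inputs are exactly the quantities the abstract says are error-prone: miscounted or misattributed citations (numerator) and misclassified ‘citable’ items (denominator) each distort the resulting figure.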

Full Text
Paper version not known
