Abstract

Science has been accelerating rapidly, producing what has been called “information overload” and, more recently, “filter failure”. In this context, journal performance indicators can play an important role in journal evaluation. Opinions on the appropriate use of journal-level bibliometric indicators are divided, but such indicators have long been used as measures in research evaluation, and many editors see it as part of their editorial duty to try to improve their journal's bibliometric indicators and rankings. There are various techniques through which this can be attempted, some more ethical than others. Some editors may try to boost their journal's bibliometric performance through gratuitous citations. This is problematic because citations are meant to provide useful, scientifically justifiable references to previously published literature; it is on this basis that citations serve as widely accepted measures of scientific impact. Superfluous citations therefore distort the validity of bibliometric indicators. It may be tempting to improve a journal's bibliometric rankings at all costs, but those rankings are only as meaningful as the data that feed into them. Indicators inflated through unethical behaviour can damage the reputation of a journal and its editors, and can lead to a loss of quality manuscript submissions, which in turn is likely to harm the journal's future citation impact.
