Abstract

Decisions in research management, such as hiring individuals or funding departments, are often driven by metrics ranking research quality. Toward the end of every June, researchers wait anxiously for Thomson Scientific to release the latest ISI impact factors so they can update the ranking of their academic achievements. Impact factors are almost universally accepted as the standard measure of journal quality, and hence of researcher quality too. Whether the journals in which one has recently published have seen their impact factor rising or falling, or whether those journals have been included in or excluded from the ISI impact factor list, can dramatically affect one's career.

Recently, alternative indicators for ranking research have been put forward. Hirsch (2005) proposed an indicator called the h index to evaluate the productivity of scientists (Ball 2005). A scientist's h index is the highest number of his or her papers that have each received at least that number of citations. Because individual researcher and journal rankings are intrinsically related, it has been suggested that the h index could fairly be used to rank journals as well (Braun et al. 2005). Doing so would avoid the pitfalls of other commonly used ranking methods (Kokko and Sutherland 1999, Cockerill 2004), and would have the benefit of measuring both significance and sustainability in scientific production; an added benefit is that the h index is difficult to manipulate.
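The definition above translates directly into a short computation: sort a researcher's (or journal's) citation counts in decreasing order and find the largest rank h at which the paper in position h still has at least h citations. A minimal sketch (the citation counts shown are hypothetical):

```python
def h_index(citations):
    """Return the h index: the largest h such that at least h papers
    have each received at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break  # sorted descending, so no later paper can qualify
    return h

# Hypothetical journal with five papers cited 10, 8, 5, 4, and 3 times:
# four papers have at least 4 citations each, but not five with 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that the result depends only on the multiset of citation counts, not on when the citations accrued, which is the property the article emphasizes below.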
Any journal that publishes papers with a seminal, long influence would be rewarded by a higher h index, whereas its ISI impact factor (if it has one) would not be affected by citations more than two years after an article's publication. If journals were ranked according to their h index, the hierarchy would better reflect journal status, as shown in the box for journals with a focus on biology. For example, BioScience has a 2004 ISI impact factor of 3.041, but its h index of 35 would put it among the leading journals for the biological disciplines.

The h index, however, is hardly intended to be the ultimate ranking indicator. Other indices have been suggested since the publication of Hirsch's paper. Taber (2005) commented that it may be better to use a "c index," which would be the total number of papers from a researcher (or a journal) cited more than once by other research groups (journals) in the most recent calendar year. Bollen and colleagues (2006) followed a different approach, one that distinguishes journal popularity from prestige: popularity reflects the crude number of citations, whereas prestige reflects the quality of the publications citing a journal's articles (Ball 2006). Bollen and colleagues' ranking algorithm is similar to the complex one used by Google to rank Web pages in search results, called PageRank. Although results from the PageRank-based approach differ somewhat from ISI impact factors, there is much overlap.

Every indicator will have its own strengths and weaknesses, but we believe that the main advantage of the h index has not been stressed enough. Given the importance of ranking in today's competitive scientific environment, it is critical to remember that researchers are the constituency most in need of a ranking indicator that is both fair and easy to compute.
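The prestige idea behind Bollen and colleagues' measure can be illustrated with the standard power-iteration form of PageRank: a citation from a highly ranked source counts for more than one from an obscure source. The sketch below is a generic PageRank on a toy citation graph, not Bollen and colleagues' actual algorithm; the journal names and links are hypothetical.

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over a dict mapping each node to the
    list of nodes it cites. Dangling nodes spread rank uniformly."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in links.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new[w] += share  # v passes prestige to what it cites
            else:
                for w in nodes:  # dangling: distribute evenly
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

# Hypothetical journals: A cites B and C, B cites C, C cites A.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

In this toy graph, C ends up ranked above B even though both are cited, because C receives citations from both A and B while B is cited only by A — the "quality of the citing publication" effect the article describes.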
The algorithm behind the h index should make it the favorite for the research community. Because the h index is based solely on the overall number of citations, not on citations during a given time period or on the quality of the journals in which the citations appear, it can very easily be computed from most lit-
