It was during the spring of 2007 that the publishing community at large first became aware of the Eigenfactor and its companion indicator, the Article Influence [1]. These two indicators, created by Dr. Carl Bergstrom and colleagues from the University of Washington’s Department of Biology, relied on the structure of the entire citation network of scholarly communication to measure the prestige of a journal, rather than simply relying on the number of citations received. Conceptually, the measure can be understood in terms of a citation from a more prestigious journal being worth more than a citation from a less prestigious journal. The Eigenfactor relied upon a class of network statistics called eigenvector centrality measures, hence the name: the leading eigenvector of the cross-citation matrix derived from Thomson Reuters’ entire Journal Citation Report (JCR) is calculated in an iterative fashion, and this is then used to calculate each journal’s Eigenfactor. While this class of algorithm is well known, and has indeed been used in a journal citation context as far back as 1976 [2], this was the first time it had been applied to the entire JCR dataset. As time progressed, the Eigenfactor and Article Influence were incorporated into the JCR, and they now sit alongside the Impact Factor.

Now in JCR 2008, released in June 2009, in addition to a new set of Impact Factors, we have a new set of Eigenfactor and Article Influence measures. The presence of these indicators provokes interesting questions as to how one fundamentally measures the “impact” of a journal, and whether some indicators are preferable to others. Table 1 depicts the Impact Factor, 5-Year Impact Factor, Total Citations, Eigenfactor, and Article Influence values of the Journal of Sexual Medicine (JSM) between JCR 2007 and 2008. The drop in values across all but the Total Citations figure can be broadly explained by the period of rapid expansion that JSM has undertaken.
In terms of papers classified by ISI as source items (a piece of original research, a review article, or a proceedings paper, if the work mentions that it was presented, in whole or in part, at a conference), JSM has grown from 40 published papers in 2004 to a projected 400 by the end of 2009. Growing a journal 10-fold while maintaining the overall quality of the published articles is a challenging balancing act. As a journal’s Impact Factor rises, so does the submission rate. An acceptance rate that once generated a stable output for quarterly publication now produces more papers than the journal can sensibly manage, and so standards are tightened. This process does not occur instantaneously, however; it occurs incrementally in response to the submissions in hand. This inevitably leads to an increased spread in the citeability of the published articles, and hence to a drop in the average citations per article compared with a period when submissions and publications were much lower. By contrast, the observed difference in the relative rankings of JSM by these different indicators can largely be attributed to the characteristics of the indicators themselves, summarized in Table 2. The Impact Factor is well understood: it takes the citations in a given year, referred to as the census period, to papers published in the prior 2 years, the target period, divided by the number of source items published in the target period. The 5-Year Impact Factor has the same single-year census period, but uses a 5-year target period, and the denominator is the number of source items published in that target period. The Total Citations figure also uses a single-year census period, but the target period is now all years of publication. As with Total Citations, the Eigenfactor is not scaled to the article level; all things being equal, a larger journal will have a larger Eigenfactor than a smaller journal.
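The census-period/target-period logic described above can be made concrete with a short sketch. This is purely illustrative, not Thomson Reuters' implementation, and the citation and source-item counts below are hypothetical rather than JSM's actual figures.

```python
def impact_factor(citations_in_census_year, source_items_by_year, target_years):
    """Citations received in the census year to papers published in the
    target years, divided by the source items published in those years."""
    cites = sum(citations_in_census_year.get(y, 0) for y in target_years)
    items = sum(source_items_by_year.get(y, 0) for y in target_years)
    return cites / items

# Hypothetical journal, census year 2008.
# citations[y] = citations received in 2008 to papers published in year y
citations = {2003: 30, 2004: 45, 2005: 60, 2006: 110, 2007: 140}
# items[y] = source items published in year y
items = {2003: 40, 2004: 40, 2005: 60, 2006: 80, 2007: 100}

two_year_if = impact_factor(citations, items, [2006, 2007])        # classic Impact Factor
five_year_if = impact_factor(citations, items, range(2003, 2008))  # 5-Year Impact Factor
```

The same function covers both indicators because they differ only in the target period; Total Citations would correspond to summing the numerator over all years of publication without dividing at all.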
More importantly, it removes the contribution from within-journal self-citation before beginning the iterative calculation.
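A minimal sketch of the eigenvector-centrality idea behind the Eigenfactor: zero the diagonal of a cross-citation matrix (discarding within-journal self-citation), column-normalize it, and iterate to the leading eigenvector. The actual Eigenfactor algorithm is more involved (it includes damping and weighting by article counts); the three-journal citation matrix here is entirely hypothetical.

```python
def leading_eigenvector(C, iters=100):
    """Power iteration on a self-citation-stripped, column-normalized
    cross-citation matrix C, where C[i][j] = citations from journal j
    to journal i. Returns prestige weights summing to 1."""
    n = len(C)
    # Remove within-journal self-citation before iterating.
    C = [[0.0 if i == j else float(C[i][j]) for j in range(n)] for i in range(n)]
    # Column-normalize: M[i][j] = share of journal j's citations going to i.
    col_sums = [sum(C[i][j] for i in range(n)) for j in range(n)]
    M = [[C[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        v = [x / s for x in v]
    return v

# Hypothetical counts: C[i][j] = citations from journal j to journal i.
C = [[50, 30, 5],
     [20, 80, 10],
     [5, 10, 40]]
scores = leading_eigenvector(C)
```

Note that the large diagonal entries (heavy self-citation) contribute nothing to the final weights, which is exactly the property that distinguishes this family of measures from a raw citation count.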