INTRODUCTION

Garfield's Journal Impact Factor (JIF) has been the bibliometric indicator most commonly used by librarians, researchers and research managers. Although the JIF has faced criticism, and reports of malpractice have surfaced, some authors still consider it the most relevant indicator for evaluating the influence of scientific journals (Zitt, 2012). However, the arguments against the arbitrary use of this indicator in research evaluation are well known, and many authors have disclosed unethical manipulations by journal editors and common misuses by individuals who lack competence in the field of quantitative studies of science (Archambault & Lariviere, 2009; Pendlebury & Adams, 2012; Smith, 2012; Weingart, 2005). Hence, bibliometricians have expressed the need for greater rigor and accuracy in journal assessments, as well as for more inclusive and viable alternatives.

The relevance of the JIF is directly related to the essential role of the citation indexes also created by Eugene Garfield, beginning in 1963, and currently covered by Thomson Reuters' Web of Science (WoS). These sources were considered the mainstream of bibliometric analysis of science for more than four decades. Over the last ten years, the emergence of new citation indexes and wide-ranging scientific databases such as Google Scholar and Scopus has in turn brought about new journal indicators, long sought after by the academic community (Brown, 2011; Fragkiadaki & Evangelidis, 2014). Scopus, the database of peer-reviewed literature developed by Elsevier, has become one of the main data sources for new journal indicators, which have been developed and tested with the aim of complementing the impact factor and overcoming the limitations highlighted in the scientific literature (Leydesdorff, 2009; Torres-Salinas & Jimenez-Contreras, 2010). Journal Metrics, a Web service launched by Elsevier in 2014, provides freely accessible indicators that measure the citation impact of the journals indexed by Scopus. The metrics provided are based on methodologies developed by the Centre for Science and Technology Studies (CWTS) of Leiden University (The Netherlands) and the SCImago Research Group (Spain).

Among the journal metrics provided (free of charge) on this website, two have been considered viable alternatives to the JIF: the Source Normalized Impact per Paper (SNIP) and the SCImago Journal Rank (SJR). Thomson Reuters also took the initiative of including advanced journal indicators, such as the Eigenfactor Score (EFS), in its Journal Citation Reports (Jacso, 2010). However, given Scopus' wide coverage and quality, SNIP and SJR have positioned themselves ahead of the JIF and the EFS as real contenders to measure the influence and prestige of scientific journals (Falagas, Kouranos, Arencibia-Jorge & Karageorgopoulos, 2008; Leydesdorff & Opthof, 2010; Moed, 2011; Schoepfel & Prost, 2009; Torres-Salinas & Jimenez-Contreras, 2010). Thus, in the race to obtain advanced bibliometric indicators as support tools for peer review, both alternatives are gaining an important degree of acceptance.

In spite of their relevance, studies that use these new indicators to analyze the behavior of Latin American scientific journals are still scarce.
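For orientation, it is worth recalling what these indicators compute. The conventional two-year JIF of a journal in census year $y$ can be written (in notation introduced here only for convenience, not taken from this study) as

\[
\mathrm{JIF}_{y} \;=\; \frac{C_{y}\!\left(P_{y-1} \cup P_{y-2}\right)}{\left|P_{y-1}\right| + \left|P_{y-2}\right|},
\]

where $P_{y-k}$ denotes the set of citable items the journal published in year $y-k$ and $C_{y}(S)$ the number of citations received in year $y$ by the items in $S$. SNIP, roughly speaking, divides a journal's citations per paper by the citation potential of its subject field, while SJR weights each citation by the prestige of the citing source in a PageRank-like fashion; both thereby correct for the field dependence that the raw JIF ignores.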
Most of the bibliometric studies of Latin American journals use Thomson Reuters' citation indexes as data sources (Collazo-Reyes, 2014; Collazo-Reyes, Luna-Morales, Russell, & Perez-Angon, 2008; Gomez, Sancho, Moreno & Fernandez, 1999; Macias-Chapula, 2010; Torricella-Morales, Van Hooydonk & Araujo-Ruiz, 2000).

The search for a strategy to enhance the visibility of Latin American journals and to achieve their inclusion in the mainstream core has been an objective of regional scientific policies since the beginning of the 1990s (Gomez, Sancho, Moreno & Fernandez, 1999; Meneghini, Mugnaini & Packer, 2006; Velez-Cuartas, Lucio-Arias & Leydesdorff, 2016; Vessuri, 1995). …