>> See video of presentation (24 min.)

Digital technologies, the growth and globalization of the research community, and societal demand to address the Grand Challenges of our times are driving changes in the dynamics of research, an evolution sometimes referred to as “Science 2.0”. These changes affect the entire research workflow, from securing resources, through conducting research, to disseminating the results through more routes than ever before: to peers, industry and society. This broad transmission of the results and benefits of research also paves the way for citizens and civil society organizations to be much more directly and actively involved as “agenda gatekeepers”, with a role in steering research, and perhaps even as funders themselves.

These changes result in a more complex research ecosystem, populated by more stakeholders with ever higher expectations. The resources to support this ecosystem are not infinite, and these changes also drive the development of additional approaches to evaluating research alongside the well-established practices of peer review and of securing expert opinion and narratives. This has driven a growing interest in the use of research metrics, alongside qualitative inputs, in making allocation decisions.

In just the same way as the changes leading to Science 2.0 are driven bottom-up, Elsevier believes that the most effective way to embed quantitative insights alongside existing qualitative ones is by endorsing a community-built solution. We are bringing our technical expertise and global reach to bear to facilitate democratic initiatives. One example of this is our engagement with the Snowball Metrics program [1], in which universities agree amongst themselves on metrics that give them useful strategic insights, rather than accept metrics that funders find useful and which are often, in effect, imposed.
The initiative tests the methods on all available data sources to ensure they are robust, commonly understood and capable of supporting apples-to-apples benchmarking, and publishes the metrics “recipes” for free so that they can be used by anyone, for their own purposes and, if applicable, under their own business models.

Such engagements have shaped Elsevier’s position on research metrics and their use in research assessment. We recognise the need for a much broader range of research metrics than has traditionally been available: publication and citation metrics remain valuable, but must be complemented by metrics in other areas such as collaboration, deposition and reuse of research data, and benefit to society. Our vision is to be able to provide quantitative information about the entire research workflow, and we are engaging on several fronts to make this vision a reality.

At the same time, we have also learnt how the research community expects research metrics to be used in a responsible way, and our approach embraces this [2]. We recognize that metrics never reflect 100% of research activity, and that they should always be used together with qualitative inputs: peer review, expert opinion and narrative. The methods underlying any metric should be open, to build trust and to stimulate debate and improvement where needed, so that these same methods can be applied to all available data, whether open or proprietary. This consistent approach will bring the greatest benefit to the research community.

References

1. Snowball Metrics Recipe Book: http://www.snowballmetrics.com/wp-content/uploads/snowball-recipe-book_HR.pdf
2. Elsevier’s position on the role of metrics in research assessment: http://www.elsevier.com/online-tools/research-intelligence/resource-library/resources/response-to-hefces-call-for-evidence-independent-review-of-the-role-of-metrics-in-research-assessment