Alternative metrics – how do research managers and administrators view them?
This is a guest post contributed by Christina Lohr, Product Manager Research Metrics at Elsevier.
In July last year, Elsevier launched an Article Metrics Module in Scopus. The module was designed together with users around the world to facilitate quick evaluation of both the impact and the broader engagement of an article, by providing a comprehensive basket of metrics spanning ‘traditional’ citation-based metrics as well as so-called ‘alternative’ metrics. This reflects the view that no single metric can tell the whole story, and that a basket of metrics is needed to help make informed decisions.
Within the Article Metrics Module, alternative metrics are divided into four categories as defined by Snowball Metrics, namely:
- Scholarly Activity – Downloads and posts in common scholarly platforms such as Mendeley and CiteULike.
- Scholarly Commentary – Reviews, articles, blogs and comments by experts and scholars, in online tools typically used by academic scholars such as F1000 Prime, research blogs, and Wikipedia.
- Mass Media – Coverage of research outputs in mass or mainstream media outlets (e.g. press clippings and news websites).
- Social Activity – The extent to which an output has stimulated social media posts on online social networking services such as Twitter, Facebook and Google+.
Now, a year after the launch of the Scopus module, we were curious to find out how research managers and administrators view alternative metrics. This was the focus of a recent small survey at the annual conference of the UK Association of Research Managers and Administrators (ARMA), which took place on 6-8 June 2016 at the Birmingham Hilton Metropole.
As part of this conference, Chris James from Elsevier’s Research Metrics team gave a workshop on alternative metrics for research managers and administrators. The workshop concluded with a survey amongst the 14 participants, who represented a cross-section of the research managers and administrators of universities across the United Kingdom.
In this blog post I present some of the results of this survey relating to the use of altmetrics.
Are the 2 Golden Rules useful?
In a recent interview on the European Association of Science Editors (EASE) Journal Blog, we describe 2 Golden Rules designed to help ensure the responsible and practical use of research metrics. The Golden Rules are:
- Always use quantitative metric-based input alongside qualitative opinion-based input.
- Ensure that the quantitative, metrics-based part of your input always relies on at least 2 metrics, to prevent bias and the encouragement of undesirable behavior.
All participants found the 2 Golden Rules useful.
Social Activity seen as most relevant alternative metric for funding applications
The audience was presented with the scenario of supporting a funding application with alternative metrics and asked to rank the 4 alternative metric categories according to their perceived importance. The participants ranked Social Activity as the most important category, followed by Mass Media, Scholarly Commentary and finally, Scholarly Activity. We interpret this result as highlighting the growing interest in demonstrating the societal impact of scientific research.
Traditional metrics for CVs
Another scenario asked the participants which types of metrics (a maximum of 5) they would be most interested in seeing when reading a researcher’s CV.
Here, the top 5 selected were Grant Income (93%), followed by Scholarly Output (64%), Outputs in Top Percentiles (57%), International Collaboration (50%) and Field-Weighted Citation Impact (36%).
This was interesting: while there is considerable interest in alternative metrics and their use in demonstrating wider impact and engagement, no alternative metrics were selected in the top 5 for this scenario. We interpret this as reflecting the caution surrounding these relatively new data sources and metrics and how they can be used practically, especially when compared against the more familiar, embedded metrics that were selected in this question.
It is also important to note that in this question we did not specify the researcher’s academic rank, research area, age, etc., and the results would likely vary for different cases.
While of course this survey only sampled the opinions of a small fraction of the community, representing one stakeholder group, it offers an interesting cross-section of opinions from a community of experts in the field. Overall, the survey positioned the newer metrics as being used in a complementary way to the well-known metrics, not as a replacement for them, which reflects the approach that we have heard from the wider global community and that we advocate.