3:AM – Evaluating research using altmetrics

This post is kindly contributed by Lauren Collister, a Sociolinguistics PhD at the University of Pittsburgh.  

The “Evaluating research using altmetrics” session began with Kim Holmberg, who presented his work on Altmetrics and research profiles for ten universities in Finland. His main question concerned how different kinds of metrics are distributed across universities and fields of study. He and his team collected information for the ten universities in Finland with the largest number of what he called “altmetric events”. (Special note: I had never encountered the term “altmetric event” before this presentation, and afterwards I heard it a lot. I wonder if it is a commonly used term or if everybody at the conference picked it up from Holmberg’s presentation.)

The team separated the data into four broad categories of academic fields: 1.) medical and health sciences, 2.) natural sciences, 3.) social sciences and humanities, and 4.) agricultural studies, engineering, and technology. They found that a university’s share of altmetric events in each field sometimes tracked its output of documents in those fields and sometimes did not — for example, some universities that did not even have a medicine department had many altmetric events for papers in the medical and health sciences field, presumably from scholars working on cross-disciplinary projects. They also found that some kinds of altmetrics favor certain fields; one example given was that medical and health research was disproportionately represented in Twitter mentions of scholarship compared to the other fields.
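To make that comparison concrete, here is a minimal sketch, with entirely invented counts, of the kind of field-share comparison Holmberg described. The four field categories come from the talk, but the numbers, variable names, and structure are my own illustration, not the team’s actual analysis.

```python
# A minimal sketch (invented numbers) of comparing a university's share
# of altmetric events per field against its share of published documents
# per field, in the spirit of Holmberg's analysis.

# Hypothetical counts for one university, by broad field category.
altmetric_events = {
    "medical_health": 420,
    "natural_sciences": 310,
    "social_humanities": 150,
    "agri_eng_tech": 120,
}
documents = {
    "medical_health": 200,
    "natural_sciences": 500,
    "social_humanities": 250,
    "agri_eng_tech": 300,
}

total_events = sum(altmetric_events.values())
total_docs = sum(documents.values())

for field in altmetric_events:
    event_share = altmetric_events[field] / total_events
    doc_share = documents[field] / total_docs
    # A ratio well above 1 suggests the field attracts disproportionate
    # altmetric attention relative to the university's output in it.
    print(f"{field}: event share {event_share:.2f}, "
          f"document share {doc_share:.2f}, "
          f"attention ratio {event_share / doc_share:.2f}")
```

On the invented numbers above, medical and health sciences gets an attention ratio well above 1, which is the sort of mismatch the team reported for universities without a medicine department.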

_____

Second, we heard about the work of Beatriz Benitez Juan and Consol Garcia Gomez on The scholarly record and its impact. One striking moment came when the presenter asked the audience: How many of you have the complete scholarly record for even one researcher at your university?

The talk prompted the audience to consider what makes up a scholarly record, especially in our evolving research context. Beyond the traditional product of scholarship – an article or a book – we have pre-publication work like methodology documents, grant proposals, and electronic lab notebooks; and then there are the post-publication parts of the scholarly record like comments, post-publication peer review, and revisions. What can we collect, and what should we collect? How can these different parts of the scholarly record be measured? This talk prepared us both for the next question we tackled in this session and for later sessions on altmetrics for data and software packages.

_____

Third in the session was Josiline Chigwada, who described her project to understand the Use of altmetric data in allocating research grants in research institutions. She studied the grant approval process at 63 institutions in Zimbabwe, including both academic and research institutions, using surveys as well as interviews. With our minds on the scholarly record from the previous talk, we may have been surprised to hear that she found that only one institution used altmetric tools at all in making these decisions. The majority of the grant approval processes involved either administrative decisions made in meetings where the topic of the grant was discussed, or performance review information that included only how many publications the investigators had. One institution did use altmetrics after the grant approval process to understand the impact of the research funded by the grant. Chigwada’s conclusion was that we have a lot of work to do; most people in Zimbabwe were not aware of altmetrics or of the kind of helpful information they could provide about potential studies. She also wants to expand her understanding of the grant process and altmetrics, and is continuing the survey in international contexts, so if you are interested, the survey can be found here: http://surveymonkey.com/r/F9W7MRB

_____

The session on evaluating altmetrics closed with Jason Priem presenting on Investigating metrics at the researcher level. Priem wanted to know what kind of information researcher-level altmetrics profiles can reveal, including the different types of altmetrics activity that individual researchers may be focused on. He and his team gathered data from ORCID profiles that had more than 20 altmetric events and investigated questions like 1.) in how many channels do researchers get altmetric events? (Answer: generally between 3 and 7), and 2.) do weighted scores like the Altmetric.com score relate to the total number of altmetric events? (Answer: yes, there is a fairly strong correlation; researchers with a high weighted score generally also have a large number of altmetric events.)
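To show what that second question amounts to in practice, here is a toy sketch, with invented per-researcher numbers, of correlating a weighted score against a raw event count. This is not Priem’s actual analysis (that is in the repository linked below); it is just a plain Pearson correlation on made-up data.

```python
# A toy illustration (not Priem's actual code) of checking whether a
# weighted altmetric score tracks the raw number of altmetric events.
from statistics import correlation  # Pearson's r; Python 3.10+

# Invented per-researcher data: total altmetric events and a
# hypothetical weighted score for each of ten profiles.
event_counts    = [22, 35, 48, 60, 75, 90, 120, 150, 210, 300]
weighted_scores = [15, 30, 40, 55, 60, 85, 100, 140, 190, 260]

r = correlation(event_counts, weighted_scores)
print(f"Pearson r = {r:.2f}")  # close to 1 => the two measures largely agree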

I have only summarized this talk briefly, but you can read the entire talk and see the data and graphs at the project’s GitHub page. In fact, Priem invited anyone interested to play with the data and let him know what they discover: http://github.com/impactstory/research-level-altmetrics

Overall, this session left us thinking about what we can measure, how we can do so, and what the different approaches can tell us about the university, the field, and the researcher.