Highlight Presentations

This post was contributed by Rebecca Welzenbach, Michigan Publishing, University of Michigan Library.

Museum as research institution and publisher: architecting analysis of scholarly content online discussions in a natural history museum using altmetrics – Richard Hulser, Natural History Museum of Los Angeles County

Richard Hulser spoke to us about the family of museums and sites that make up the Natural History Museum of Los Angeles County, and about the ways this institution is using altmetrics: to connect with patrons in the digital age via social media, to understand and draw attention to the importance of the museum as a research institution, to improve the museum’s capabilities as a scholarly publisher, and to communicate the museum’s reach and impact to key stakeholders, administrators, and decision-makers.

He offered the recent announcement of the California State Dinosaur as an example of the kind of activity that sits at the nexus of these concerns: it appeals to a museum vice president (a “dinosaur guy”), creates an opportunity for substantial public outreach via social media and bringing people into the museum to see the dinosaur, and will also spur new research opportunities.

Making visible the scientific research coming out of the museum is an ongoing challenge: it is not as obvious as the public exhibitions and education work. Their key challenges include making the value of research obvious to the institution, raising the perception of the institution’s value more broadly, and obtaining measurements to back up statements and reasoning. He shared with us an example of an agenda from a staff meeting of the museum’s Research and Collections unit; a primary goal of these meetings is to highlight recently published research associated with the museum. Until recently, the work of building a bibliography of the museum’s research outputs was done entirely by hand, and it was their foray into citation management tools like Zotero that led them on to altmetrics.

They decided to use Altmetric, which made it much easier to identify the attention publications receive in the wild, as well as to track other activity such as interviews on NPR. He noted that the museum’s Communications office also really likes the tool, as it helps them identify individuals (such as active tweeters) who might become future promoters of the museum.

They faced the usual concerns when adopting this tool:

  • The use of metrics of any kind is not well received by all researchers
  • Altmetric doesn’t necessarily reflect quality–just popularity/attention
  • Altmetric doesn’t capture all published research; this is especially an issue in fields in which the museum is strong, such as anthropology and archaeology, where researchers tend to publish in many small, niche scholarly society journals
  • Altmetric doesn’t capture all the online discussions of research projects

Nevertheless, the tool has enabled them to identify which research outputs have had the greatest reach, including some that they weren’t aware of or hadn’t predicted would be popular. Evaluating Altmetric feedback is helpful for researchers, who can begin to see which articles are mainly tweeted (the most popular form of engagement captured by Altmetric) and which receive other types of attention, and adjust their self-promotion accordingly.
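To make this concrete, here is a minimal sketch of how such a report might be assembled from the public Altmetric details-by-DOI endpoint. The DOI, response field names, and ranking logic below are illustrative assumptions for this post, not the museum’s actual workflow.

```python
import requests

# Public Altmetric "details by DOI" endpoint (basic lookups do not require a key).
ALTMETRIC_URL = "https://api.altmetric.com/v1/doi/{doi}"


def attention_summary(doi):
    """Summarize where a publication's online attention comes from,
    or return None if Altmetric has no record for the DOI."""
    resp = requests.get(ALTMETRIC_URL.format(doi=doi))
    if resp.status_code == 404:  # Altmetric has seen no attention for this DOI
        return None
    resp.raise_for_status()
    record = resp.json()
    return {
        "doi": doi,
        "score": record.get("score", 0),                     # overall attention score
        "tweets": record.get("cited_by_tweeters_count", 0),  # accounts tweeting it
        "news": record.get("cited_by_msm_count", 0),         # mainstream news mentions
        "blogs": record.get("cited_by_feeds_count", 0),      # blog posts
    }


# Rank a small bibliography (e.g. one exported from Zotero) by overall attention.
bibliography_dois = ["10.1000/example.doi"]  # placeholder DOIs
summaries = [s for s in (attention_summary(d) for d in bibliography_dois) if s]
for item in sorted(summaries, key=lambda s: s["score"], reverse=True):
    print(item)
```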

He wrapped up by describing the museum’s efforts to improve the accessibility/trackability of their own scholarly publications, Contributions in Science. Although PDFs have been freely available for some time, they have done a lot of work recently to digitize and produce good-quality article-level metadata to make these more discoverable and trackable. Their next step is to consider assigning DOIs or other permanent, stable identifiers that will improve their ability to track the reach of and engagement with their own publications, as well as those of their researchers.

As many institutions consider how to incorporate altmetrics into their usual operating practices, a museum is an ideal case study of an institution simultaneously looking to reach the public, funders, and the scholarly community.


Next-generation altmetrics: responsible metrics and evaluation for open science – Isabella Peters and Judit Bar-Ilan, ZBW Leibniz Information Centre for Economics

Judit Bar-Ilan and Isabella Peters were part of a group charged in 2015 by the European Union to investigate the opportunities of altmetrics and to produce a report and recommendations for Open Science and altmetrics. The group had to decide what elements mattered and how to measure them, identify or create indicators for those measurements, and test and revise as they went along. Caveats: the act of measuring something necessarily influences the measured process, and “Not everything that can be counted counts, and not everything that counts can be counted.”

The group’s report was completed in early 2017 and is available in full here: tinyurl.com/nextgenmet

The group determined that traditional metrics are insufficient for Open Science, though these metrics are still valuable in some ways (for example, citation metrics can help show the citation advantage of open vs. closed publications). They also identified a number of advantages that altmetrics bring to the conversation: increased visibility, an expanded view of what impact looks like (e.g., taking into account exposure to the public), coverage of non-traditional sources (such as news outlets or blogs), and events that can be measured and counted and that occur quickly. They also identified many of the challenges with altmetrics: completeness of coverage, lack of transparency, validity, disciplinary differences, the risk of gaming, and the need for acceptance by the research community and decision makers.

The group ultimately made five “headline findings,” with 12 targeted recommendations aligned with four of the headings of the European Open Science Agenda. In general, the headline findings were at a very high level, recommending that metrics in an open science environment call for a robust mix of quantitative and qualitative measures, thoughtfully developed and responsibly used.

Key recommendations included:

  • In an open science environment, metrics also need to be open–standardized, transparent, interoperable, and freely available.
  • There is an opportunity in the open science environment to define new metrics that incentivize/reward/recognize openness
  • We are producing more and more data. We need to think about the role of metrics both in drawing attention to research and in “forgetting” the things that we don’t need or shouldn’t keep: the need for indicators to “organize oblivion.”

The “big takeaway” for me from this talk was the focus on the need to thoughtfully decide *what* to measure, and to *decide* what indicators we want to use or create–don’t just rush into measuring what you see, but rather decide what’s valuable, and then find ways to measure it. This aligns, in my mind, with some of the goals of the HUMetrics HSS presentation from Wednesday.


Using altmetrics to understand the research landscape – Chris Manuel, Canadian Institutes of Health Research

Chris Manuel is a corporate performance analyst at the Canadian Institutes of Health Research (CIHR). With the caveat that this strand of work is not the only way CIHR measures its impact, he walked us through the extensive process that CIHR uses to track its funding dollars all the way through to behavioral changes in health science practice.

This process occurs in several steps:

  • CIHR funds researchers
  • Research happens
  • Results are published in knowledge products (KPs), acknowledging CIHR
  • Research results are cited
  • Results influence decision making
  • Decision making documents influence behavior
  • Behavior changes result in improvements in health, health systems, or society

His analysis focuses on the uptake of research published in Knowledge Products by Downstream Documents (DDs) such as policy documents, reports, and recommendations. How do they measure this?

Acknowledging that this record is incomplete, CIHR uses Web of Science data to identify publications that acknowledge CIHR funding. These are the publications they are able to track through to their impact on behavior. Then they do systematic, often very manual, harvesting of downstream documents such as policy documents, patents, etc. They match citations in these documents to the CIHR-funded research KPs in WoS and assess whether the influence of the KP on the DD is weak, moderate, or strong, based on how important the KP is to the outcomes of the DD and how many times the KP is cited in it.
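As a rough illustration of that matching and scoring step, the sketch below classifies a KP’s influence on a DD from its citations. The data structures and the weak/moderate/strong thresholds here are assumptions made for illustration; they are not CIHR’s actual rubric.

```python
from dataclasses import dataclass, field


@dataclass
class KnowledgeProduct:
    doi: str    # CIHR-acknowledging publication identified in Web of Science
    title: str


@dataclass
class DownstreamDocument:
    title: str                 # e.g. a policy document, report, or patent
    cited_dois: list[str]      # every DOI cited in the document
    key_dois: set[str] = field(default_factory=set)  # DOIs central to its conclusions


def influence_strength(kp: KnowledgeProduct, dd: DownstreamDocument) -> str | None:
    """Classify a KP's influence on a downstream document.

    The weak/moderate/strong labels follow the talk; the thresholds below are
    illustrative placeholders only.
    """
    times_cited = dd.cited_dois.count(kp.doi)
    if times_cited == 0:
        return None  # not cited: no observable influence
    if kp.doi in dd.key_dois or times_cited >= 3:
        return "strong"  # central to the DD, or cited repeatedly
    if times_cited == 2:
        return "moderate"
    return "weak"


# Example: one funded publication matched against one hypothetical policy document.
kp = KnowledgeProduct(doi="10.1000/example.kp", title="A CIHR-funded study")
dd = DownstreamDocument(
    title="A provincial health guideline",
    cited_dois=["10.1000/example.kp", "10.1000/other.paper"],
    key_dois={"10.1000/example.kp"},
)
print(influence_strength(kp, dd))  # -> "strong"
```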

This allows the CIHR to report out on indicators of its own impact, for example, its contributions to:  

  • Capacity building (for example: how many people have received funding, how many authors per Knowledge Product, how many female and early career researchers per KP)
  • Advancing knowledge (for example: the usual citation metrics for KPs)
  • Informing decision-making (for example: the number of DD influences, observable influence beyond academia, the number of years from funding to observable influence outside academia, the percentage of federal health department documents influenced by CIHR-supported research)
  • Health systems and other economic and social impacts (for example: the number of patents influenced; the top KPs with worldwide influence)

This is a very high-level summary of an extremely detailed process that has generated a huge amount of data. The major question from the audience was: will CIHR comply with its own OA policy and make this data openly available to everyone? That is a work in progress and remains to be seen!