2:AM Altmetrics & Research Evaluation
13:05 – 13:50: Altmetrics & Research Evaluation
Chair: Erik van Aert, NWO
Thed van Leeuwen, Rodrigo Costas, & Clifford Tatum (CWTS – Leiden University) – Slowing the pace on applying metric techniques on Open Science
Traditionally, advanced bibliometrics have been the ‘gold standard’ in research evaluation in many fields. Due to changes in communication patterns across fields, alternative ways of assessing research are now appearing on the landscape. One of the major developments in scientific communication is the advent of the Openness movement, through which various activities in academic life become more democratic, transparent, and, hopefully, fairer. This extends to publishing and the costs involved, to how data are shared, and to how peer review is organized, to name a few instances in which the issue of Openness is raised. Of a somewhat more recent nature is the way the assessment of scholarly activity is organized, in particular with respect to how the various audiences with whom scholars communicate are taken into account. A new way of looking at research assessment is through the recent ‘alternative metrics’, also referred to as Altmetrics.
Classical bibliometrics are under pressure due to international (the DORA Declaration) and national (SiT) debates and initiatives concerning the organization of research assessment in the various layers of the science system. This has prompted science policy to refocus on alternative ways of assessing research performance. In this presentation we will show, using a recent example, how careful we have to be in choosing metrics to support research assessment practices as well as science policy decision making.
Eppo Bruins & Rens Vandeberg (STW / NWO) – Innovation indicators: Why the future is so hard to predict
In science and innovation, quality indicators are usually based on counting uniform, globally applicable parameters. While this completely ignores the pluriform character of reality, any differentiated, more nuanced approach has usually led to a zoo of deliverables, hampering comparison and serious evaluation. We present an impact-driven approach, the 4D-model for valorisation, which overcomes these shortcomings and helps to get the ‘real talk’ on the table. We also present the first application of this approach in the evaluation of innovation programs and funding instruments.
Edwin Horlings, Rathenau Institute – The power and appeal of new metrics: can we use Altmetrics for evaluation in science?
With the rise of the information society, science has moved into a global online environment, which has spawned new forms of communication, new channels, and a cultural shift towards openness. Online communications produce new metrics that can be harvested, aggregated, and compared with other, existing metrics. Altmetrics have enriched and diversified our data on science and provide alternative ways of measuring scholarly and societal impact. As a data source for the study of science, Altmetrics can already be used, and there are countless examples of good empirical studies. But before Altmetrics indicators can be used in formal evaluations, the quality standards must go up.
I first discuss the main properties of indicators for formal evaluations. Are Altmetrics indicators reliable and unbiased? Are they valid? Can we scale from individuals to groups, institutes, and nations, and can we compare them? We should also understand the behavioural mechanisms that produce these metrics. I believe that, at the moment, Altmetrics is not suitable for use in formal evaluations. However, in the late 1950s the same was true of the citation indexes that have since become the foundation of evaluative bibliometrics. I explore what needs to be done by looking at efforts to introduce a wider set of metrics for general well-being as a supplement to Gross Domestic Product. I end with a reflection.

The spirit of Altmetrics is revolutionary: look beyond conventional indicators and think more broadly about impact and involvement. In that same spirit, you might consider not using Altmetrics in evaluation at all: being able to measure everything does not make you happier or more intelligent. Or you might consider broadening the scope of Altmetrics to include all the functions of the university simultaneously: education, research, and valorisation.
Peter van den Besselaar (VU University Amsterdam) – What is altmetrics and how to proceed with it?
Bibliometrics was driven by the availability of data, the Web of Science, and that data enabled but also limited the kinds of indicators of scholarly performance that could be developed. Most indicators are variants of publication and citation counts and have, as we know, many limitations. Altmetrics is, to some extent, following a similar development: new data are deployed to develop new indicators. However, I will argue that one needs to understand the dynamics of the science system in order to develop adequate indicators.
In this presentation I will first discuss the limits of publication and citation counts, as well as those of peer review. I then discuss a few examples of Altmetrics: an indicator for the societal impact of research infrastructures, indicators for the independence of researchers, and indicators for the performance of research groups. What can we learn from these examples? The issue is not to replace imperfect indicators with peer review, as is increasingly argued, but to develop theory-based, smart Altmetrics to support selection and decision-making. The examples may show how that can work.