Opportunities & responsibilities

This is a guest post contributed by Euan Adie, founder of Altmetric.com.

It’s easy to talk about new data sources, new research and new use cases for altmetrics – and that’s a good thing. It’s great to see how things have developed since the altmetrics manifesto itself was written in 2011, or even since 1:AM was held back in 2014.

There’s a lot of exciting stuff happening and interesting data available. We’ve got to continue developing new datasets, tools and services, and we’ll hear a lot about many of them at 3:AM. It’d be a shame, though, if we got too carried away by opportunities and forgot about our corresponding responsibilities as data providers and researchers. I think altmetrics are less useful, maybe even dangerous, without the right framework to use them in.

Part of that framework is common sense: if you’re going to use these data for assessment, use them alongside qualitative human judgement. I’m a big fan of something that Jason & Heather at ImpactStory have said before, about encouraging use of the data from the bottom up – having researchers pick and choose the data that suits the stories they want to tell in grant applications or promotion and tenure (P&T) packages – rather than top down, where the data is chosen for you to fit a narrative you’re not necessarily aware of.

I’m keen for us not to focus too much on the quantitative in general: beyond a certain point, are more metrics actually useful? Having a basket of metrics to pick from is good. But what if the basket is a hundred, two hundred different variables, each with its own data collection quirks, discipline differences and meanings? It has been gratifying to see much more focus and research on the meaning of different existing altmetrics and on how they can be used.

I’m also keen for us all to keep pushing auditable, transparent, meaningful data. The NISO recommended practice document covers some of this, but not really the auditable part, which I think is worth talking about. Transparency is about the data collection: saying “here’s where I got this count from”. Auditability is being able to look at the individual items that make up the count.

The Impact Factor (leaving aside its pros and cons) could, I guess, be a transparent source for altmetrics providers – you could link back to Thomson Reuters’ JCR product. Doing so doesn’t make it auditable, though: you can’t see which citations are used in the calculation, or whether they were positive or negative. Equally, nobody can tell you whether a Facebook share count of 3 for a paper means that one person shared it three times, three people shared it by mistake and then deleted it, or two people shared it to criticize it. There’s no auditability, and without it I think the data is very hard to interpret consistently.
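To make that distinction concrete, here’s a minimal, purely illustrative sketch of the difference in data terms. The field names and structure are hypothetical – they’re not any provider’s actual schema – but they show how a transparent count tells you where a number came from, while an auditable one also carries the individual items behind it.

```python
# Purely illustrative sketch -- hypothetical field names, not any provider's real schema.

# A transparent but not auditable record: you know where the count came from,
# but not what it is made of.
transparent_count = {
    "source": "facebook",            # where the number was collected from
    "paper_doi": "10.1234/example",
    "count": 3,                      # no way to see the three underlying shares
}

# An auditable record: the same kind of count, but backed by the individual
# items, so a reader can inspect who shared what and in what context.
auditable_count = {
    "source": "twitter",
    "paper_doi": "10.1234/example",
    "count": 3,
    "events": [
        {"account": "@researcher_a",   "posted_at": "2016-09-01", "url": "https://..."},
        {"account": "@journal_club_b", "posted_at": "2016-09-02", "url": "https://..."},
        {"account": "@critic_c",       "posted_at": "2016-09-03", "url": "https://..."},
    ],
}

# Both records are transparent about their source; only the second can be audited,
# because the count can be checked against the items that make it up.
assert auditable_count["count"] == len(auditable_count["events"])
```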

Of course, sometimes the value of a source outweighs the need for auditability. Many providers (including altmetric.com) show Mendeley readers, even though for privacy and practical data-availability reasons you can’t audit exactly who those readers are. That’s fine: let’s not claim auditability, though, as doing so just dilutes the term when it’s used elsewhere.

Finally, and most importantly, I think the Leiden Manifesto matters. If you haven’t read it already, go read it. It’s not rocket science, but the principles there are built on experience of how previous kinds of metrics and data have been (mis)used, and we’d all be foolish to ignore them. Let’s go further, though, and not just nod and leave those principles as theory but embrace them as a community, embedding them in the workflows of the tools we’re building and following them in the analyses we produce. Now is the time – when most people are exploring altmetrics for the first time – to push best practice in their use.

Let’s take this amazing opportunity not just to introduce new altmetrics data – which can be genuinely useful for many kinds of researchers – but also to reshape, in a small way, the norms and practices around research assessment in general.

I’m looking forward to discussing this and lots of other things in Bucharest. See you there!