The Twitter Effect

The session started with a presentation from Brett Butliere, who asked: are more-tweeted papers and topics also more contradicted? Brett and his collaborators had conducted a study of over 160,000 tweets about research outputs to determine whether research that provoked controversy attracted more attention – a trend that he noted could potentially be seen in the Altmetric Top 100 lists every year. Amongst the topics most often tweeted about were things like dinosaurs and obesity – certainly areas that many people have opinions on. The research team concluded that there was some statistical evidence supporting their hypothesis, but noted that more research would need to be done and that different methods could also be explored.

Next up was Zohreh Zahedi, who presented her research on the imbalanced use of social media across different countries. Zohreh conducted a cross-country analysis of Twitter use to see whether tweeting about research was more prevalent in certain countries, and the extent to which people tweeted about authors with affiliations in the same country as the tweeter themselves. Zohreh found some fairly significant differences: although an increasing amount of research comes from China, Russia, South America and India, the majority of Twitter activity still takes place in the US and UK, and focuses on research from those countries. She summarized by highlighting that we need to be careful to bear these differences in mind when considering what constitutes ‘high’ vs. ‘low’ amounts of attention, and questioned whether we are in danger of creating an ‘altmetrics divide’.

Fereshteh Didegah then shared some really interesting results from a study which looked at the quality of interactions and engagement around research articles on Twitter. Using 250 articles and their 8,000 associated tweets, Fereshteh and her team classified engagement into three categories: dissemination, consultation and evaluation.

The research team also explored issues of false popularity, including harassment and clone accounts, in detail – trying to understand some of the causes behind them and the resulting effect. Overall, the research found that there are a broad array of people involved in discussions about science on Twitter – and that the majority of it focuses on simply sharing rather than adding any meaningful commentary.

‘Making piracy fun’ became the (perhaps unintended) tag-line of the next talk, where Tim Bowman discussed the results of analyzing the use of the #icanhazpdf hashtag. The study aimed to understand what percentage of tweets were requesting documents that were behind a paywall, and what other conversations were taking place around them. By pulling in and matching data from a variety of sources, Tim was able to identify some interesting interactions – including a relatively substantial number of people requesting papers that were in fact already open access. Further digging into the data revealed people using the hashtag to advertise their access to full-text content as a service to others, and many librarians seeking alternative ways to meet the needs of researchers. Use of the hashtag has become so widespread, Tim noted, that social ‘norms’ have emerged – such as deleting the request tweet as soon as the paper had been received. These norms provide a ‘frame’ for people to normalize their subversive sharing activity.

Last up was Rodrigo Costas, who has been doing some really exciting work to improve the way we identify scholars on Twitter. Rodrigo matched article records from Web of Science with attention data from Altmetric to identify researcher Twitter accounts (pointing out the limitations of this along the way – there are many researchers on Twitter who have never shared a paper and therefore do not appear in the Altmetric database), and then cross-referenced those accounts with ORCID records to validate the identity of each tweeter. Rodrigo found over 387,000 scholars with a Twitter account, and was confident of 94% accuracy amongst those based on the ORCID validation. What makes this research so exciting, Rodrigo posited, is not necessarily what has been done so far but the potential to expand from here: if we are able to better identify researchers in social spaces, then we can start to look in more depth at not just how they share papers, but how they communicate and engage on social networks in general, and what that might mean for their field.

This was a brilliant session brimming with insights and ideas for further investigation – do get in touch with the authors if you have more to discuss (via Twitter, of course ;))