NISO – update on standards
Graham has been a Marketer at Wiley for 13 years, spending the majority of that time working closely with the Engineering, Statistics, Materials and Physics communities in the Global Research Division. In 2013 Graham moved into a new role in the Author Marketing team, with the aim of providing new services and products across the Global Research Community. Within this remit there has been the opportunity to experiment with, develop and roll out a number of initiatives that aim to add value to the overall author experience.
Todd Carpenter, Executive Director at NISO, opened the session by posing the question: are we all measuring in miles or kilometres? A great opening gambit. NISO confirmed their credentials in the space with work in altmetrics dating back to 2010, and Todd spoke of how they are working to build trust in metrics through standards.
Todd went on to outline an infrastructure for metric assessment, emphasising that we should define terms upfront – get everyone on the same page and agree what each term means. We then need to know what identifiers (ORCID, etc.) are required.
- How granular will we need data to go – will it be article, journal, collection?
- How long do we measure for? Some measures take longer to meaningfully show up than others.
- Do we have consistency across the numerous metric providers? Unfortunately, when a detailed analysis is undertaken, the answer is no.
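The identifier question above is a concrete one: persistent identifiers such as ORCID carry built-in error detection. As a minimal sketch (this validator is illustrative, not an official ORCID library), the final character of an ORCID iD is a check digit computed with the ISO 7064 MOD 11-2 algorithm, so a mistyped iD can be rejected before any metrics are attributed to it:

```python
def orcid_check_digit(base_digits: str) -> str:
    """Compute the ISO 7064 MOD 11-2 check digit over the first 15 digits."""
    total = 0
    for ch in base_digits:
        total = (total + int(ch)) * 2
    remainder = total % 11
    result = (12 - remainder) % 11
    return "X" if result == 10 else str(result)


def is_valid_orcid(orcid: str) -> bool:
    """Validate an ORCID iD written as four hyphenated groups, e.g. 0000-0002-1825-0097."""
    digits = orcid.replace("-", "")
    if len(digits) != 16:
        return False
    return orcid_check_digit(digits[:15]) == digits[15]


# The sample iD is ORCID's published test record (Josiah Carberry).
print(is_valid_orcid("0000-0002-1825-0097"))  # True
print(is_valid_orcid("0000-0002-1825-0096"))  # False (check digit fails)
```

A single transposed or mistyped digit changes the check digit, which is exactly the kind of low-level hygiene that cross-provider metric consistency depends on.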
Essentially the key is Standards, Standards, Standards. If you get it right there should be some “trust” in the measures that are being reported, wherever they are being reported.
Having scoped what is now a three-year project, Todd was keen to get things moving as soon as possible. What immediately followed were three meetings, plus a number of one-on-one meetings, to explore the space and understand the various perspectives. Check out the NISO white paper to get a deeper understanding – it runs to nine major themes across twenty-five connected projects.
What are those major themes? Definitions; application to types of research outputs (can we measure across data as well as the article?); discovery implications; research evaluation; data quality and gaming; grouping, aggregating and granularity; context; adoption and promotion (the issue is not one of reaching consensus on the standard, but of getting people to adopt it).
In mid-2014 these results were presented and the next steps are to identify a core of three to five projects to focus on, leading to recommended standards being published in early 2016.
The priorities are now: definitions; persistent identifiers; improved data quality and normalisation across the numerous providers; identifying the research output types most applicable to the use of metrics; standardisation of APIs or download exchange; and an audit process for data reproducibility.
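To make the normalisation and API-standardisation priorities tangible, here is a purely hypothetical sketch of what a normalised cross-provider metrics record might contain. Neither NISO nor any provider defines this schema; every field name below is an assumption invented for illustration:

```python
import json

# Hypothetical normalised record: the schema and field names are
# illustrative assumptions, not part of any NISO recommendation.
record = {
    "identifier": {"type": "doi", "value": "10.1234/example"},  # made-up DOI
    "output_type": "journal-article",   # granularity: article-level
    "provider": "example-metrics-provider",  # fictional source
    "metric": "mentions",
    "count": 42,
    "window": {"start": "2014-01-01", "end": "2014-12-31"},  # measurement period
    "retrieved": "2015-01-15T00:00:00Z",  # supports audit/reproducibility
}

print(json.dumps(record, indent=2))
```

The point of a shared shape like this is that every field maps to one of the priorities above: a persistent identifier, an explicit output type and granularity, a named provider, a defined measurement window, and a retrieval timestamp that makes the number auditable.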