KEY TAKEAWAYS
- The key goals of reforming research assessment include reduced reliance on counterproductive, citation-based metrics and promotion of open science.
- New metrics designed to incentivise open science risk undermining initiatives to improve research evaluation.

Wider adoption of open science and reduced reliance on counterproductive, citation-based metrics are both key goals in the push to reform research assessment. However, in an article for Research Professional News, Ulrich Herb argues that flooding the market with open science metrics designed to incentivise researchers undermines the very reforms those metrics are meant to promote.
Incentivising open science
Herb reports that while open science aims to improve transparency, accessibility, and collaboration in research, initiatives have struggled to gain traction with researchers. In a bid to push open science forward, advocates, research institutions, and funders have designed myriad new metrics to incentivise openness, including:
- counting outputs such as open access publications, preprints, Findable, Accessible, Interoperable and Reusable (FAIR) datasets, data management plans, replication studies, and pre-registrations
- measuring attention from downloads, citations, and media coverage
- analysing social dimensions via collaborations, diversity, and citizen science activities.
New metrics are already the subject of extensive research and development in Europe.
Open science metrics undermine research assessment reform
Herb believes that open science metrics are experimental, fragmented, and lacking standardisation. Their dependence on quantitative measurement conflicts with the key principles of research evaluation reform, which promote qualitative, holistic assessment. Further, because open science metrics are used both to measure behaviour and influence it, they can encourage ‘metric-driven’ activities, such as splitting data into multiple cuts to generate high numbers of FAIR-licensed datasets, or choosing diamond open access journals over more appropriate venues. Finally, Herb argues, the current lack of clarity around precisely what open science metrics are measuring renders them as counterproductive for research assessment as the citation-based metrics they are designed to replace.
“Because open science metrics are used both to measure behaviour and influence it, they can encourage ‘metric-driven’ activities.”
Using open science metrics as a force for good
Herb suggests that, if standardised, open science metrics could promote open science practices. At present, they risk creating a culture of incentivised behaviours that contradict the very ideals of open, fair, and meaningful research evaluation. The task ahead is to ensure that open science drives a genuine shift in how research is assessed.