The Publication Plan – a central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing.
https://thepublicationplan.com

Is high-volume publishing threatening research integrity?
https://thepublicationplan.com/2025/07/01/is-high-volume-publishing-threatening-research-integrity/
Tue, 01 Jul 2025

KEY TAKEAWAYS

  • A recent analysis revealed ~20,000 scientific authors publishing impossibly high numbers of articles.
  • High-volume publishing in the pursuit of inflated metrics represents a threat to research integrity.

We have reported previously on the rising numbers of highly prolific scientific authors. Dalmeet Singh Chawla recently highlighted this issue in Chemical & Engineering News, discussing findings that ~20,000 scientists from Stanford’s top 2% list publish an “implausibly high” number of papers. Singh Chawla explored the implications of high-volume publishing on research integrity, as well as potential solutions.

Study findings

The study, published in Accountability in Research, examined the publication patterns of ~200,000 researchers from Stanford University’s list of top 2% scientists (based on citation metrics), spanning 22 distinct disciplines. It found that:

  • around 10% (20,000 scientists) produced an impossibly high volume of publications
  • some scientists published hundreds of studies per year, with hundreds or even thousands of new co-authors
  • approximately 1,000 were early-career scientists with ≤10 years’ academic experience.

Impact on research integrity

Analysis authors Simone Pilia and Peter Mora attribute the surprising number of hyperprolific authors to a culture that rewards publication quantity with high metric scores. They suggest that this not only compromises research quality but also leaves some scientists, “particularly the younger ones”, feeling pressured. Pilia and Mora linked the incentive to churn out large quantities of publications with “unethical practices” such as the inclusion of co-authors who have not made adequate contributions to the research. Based on their findings, Pilia and Mora warn that normalising high-volume publishing poses a significant threat to the fundamental academic process.

“Normalising high-volume publishing poses a significant threat to the fundamental academic process.”

A divisive solution?

Pilia and Mora propose adjusting metrics for scientists exceeding publication and co-authorship thresholds. However, according to Singh Chawla, information scientist Ludo Waltman fears that such adjustments would make research evaluation too complex and confusing. He proposes that research assessment should focus less on metrics and more on a wider range of research activities.

The reliability of metrics for research evaluation is an ongoing topic of discussion within the scientific community, and this latest research serves as a reminder for authors to keep research integrity at the heart of their publication decisions.

————————————————–

Do you think high-volume publishing undermines research integrity?

Are open science metrics at odds with research assessment reform?
https://thepublicationplan.com/2025/06/18/are-open-science-metrics-at-odds-with-research-assessment-reform/
Wed, 18 Jun 2025

KEY TAKEAWAYS

  • The key goals of reforming research assessment include reduced reliance on counterproductive, citation-based metrics and promotion of open science.
  • New metrics designed to incentivise open science risk undermining initiatives to improve research evaluation.

Wider adoption of open science and reduced reliance on counterproductive, citation-based metrics are both key goals in the push to reform research assessment. However, in an article for Research Professional News, Ulrich Herb argues that flooding the market with open science metrics designed to incentivise researchers undermines the very reforms they are meant to promote.

Incentivising open science

Herb reports that while open science aims to improve transparency, accessibility, and collaboration in research, initiatives have struggled to gain traction with researchers. In a bid to push open science forward, advocates, research institutions, and funders have designed myriad new metrics to incentivise openness, including:

  • counting outputs such as open access publications, preprints, Findable Accessible Interoperable and Reusable (FAIR) datasets, data management plans, replication studies, and pre-registrations
  • measuring attention from downloads, citations, and media coverage
  • analysing social dimensions via collaborations, diversity, and citizen science activities.

New metrics are already the subject of extensive research and development in Europe.

Open science metrics undermine research assessment reform

Herb believes that open science metrics are experimental, fragmented, and lacking standardisation. Their dependence on quantitative measurement conflicts with the key principles of research evaluation reform, which promote qualitative, holistic assessment. Further, because open science metrics are used both to measure behaviour and influence it, they can encourage ‘metric-driven’ activities, such as using multiple data cuts to generate high numbers of FAIR-licensed datasets, or selecting diamond open access in favour of more appropriate journals. Finally, Herb argues, the current lack of clarity around precisely what open metrics are measuring renders them as counterproductive for research assessment as the citation-based metrics they are designed to replace.

“Because open science metrics are used both to measure behaviour and influence it, they can encourage ‘metric-driven’ activities.”

Using open science metrics as a force for good

Herb suggests that, if standardised, open science metrics could promote open science practices. At present, they risk creating a culture of incentivised behaviours that contradict the very ideals of open, fair, and meaningful research evaluation. The task ahead is to ensure that open science involves a genuine shift in how research is assessed.

————————————————–

What do you think – are open science metrics at odds with improving research evaluation?

Is there a citation advantage with open access?
https://thepublicationplan.com/2021/06/24/is-there-a-citation-advantage-with-open-access/
Thu, 24 Jun 2021

With Plan S coming into effect earlier this year, there has been much discussion regarding the potential benefits and challenges associated with open access. It has been suggested that open access articles are available to a larger audience than those published behind paywalls, leading to increased visibility, readership and impact. Citations are often used to measure these factors. However, a number of studies have failed to reach a consensus on whether an open access citation advantage exists.

Dr Isabel Basson and colleagues aimed to address this question by applying three measures of citation advantage:

  • normalised citation score (NCS) – indicates if an article received the expected number of citations and corrects for subject area and publication year
  • citedness – whether articles were cited by individuals other than the authors within 2 years of publication
  • most frequently cited – the percentage of publications in the most frequently cited 1%, 5% and 10% of articles in each subject area.
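The NCS is, in essence, an article’s citation count divided by the average citations of comparable articles from the same subject area and publication year. A minimal sketch of this normalisation, using hypothetical data and assuming the standard field-normalisation definition (the study’s exact implementation may differ):

```python
from collections import defaultdict

def normalised_citation_scores(articles):
    """Compute a field- and year-normalised citation score (NCS) for each
    article: its citations divided by the mean citations of all articles
    sharing its subject area and publication year."""
    # Group citation counts by (subject area, publication year)
    groups = defaultdict(list)
    for a in articles:
        groups[(a["subject"], a["year"])].append(a["citations"])

    # Mean citations per group: the "expected" number of citations
    means = {key: sum(cites) / len(cites) for key, cites in groups.items()}

    # NCS = 1.0 means the article was cited exactly as often as expected
    return {
        a["id"]: a["citations"] / means[(a["subject"], a["year"])]
        for a in articles
    }

# Hypothetical example: two oncology articles from 2019
articles = [
    {"id": "A", "subject": "oncology", "year": 2019, "citations": 30},
    {"id": "B", "subject": "oncology", "year": 2019, "citations": 10},
]
scores = normalised_citation_scores(articles)
# Group mean is 20, so A scores 1.5 (above expectation), B scores 0.5
```

On this definition, comparing an article only against its own subject-area and year cohort is what corrects for the large differences in citation behaviour across disciplines.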

The study, published in Scientometrics, used open access labels in the Web of Science (WoS) metadata to identify open access articles published in journals listed in the Directory of Open Access Journals and compared measures of citation advantage with subscription journal articles. Limiting the articles to English-language only to avoid a potentially confounding effect of language, the authors identified over 3.6 million articles of which 87.3% were published in subscription journals and 12.7% were published open access. The proportion of open access versus subscription journal articles varied considerably with individual subject areas.

Basson and colleagues reported results for the three measures of citation advantage:

  • NCS – a relationship between NCS and access status was found in 76 (30%) of the 250 WoS subject areas investigated. An open access citation advantage was seen in only one subject area; in the remaining 75, subscription journal articles showed a citation advantage.
  • Citedness – a relationship between citedness and access status was seen in fewer than half of subject areas, only 4 of which showed an open access citation advantage.
  • Most frequently cited – the citation advantage favoured subscription journal articles over open access journal articles in the majority of subject areas.

Across all measures of citation advantage, only six of the 250 subject areas in WoS were reported to experience an open access citation advantage compared with subscription journal articles.

This study was one of the first to use open access labels in the WoS metadata to investigate citation advantage with access status. Overall, the authors conclude that access status accounts for little of the variability in the number of citations an article receives and suggest that other factors need to be considered when explaining variation in citation.

——————————————————–

Do you find the reported lack of citation advantage with open access surprising?

——————————————————–

Does sharing health research on social media increase its impact?
https://thepublicationplan.com/2020/12/08/does-sharing-health-research-on-social-media-increase-its-impact/
Tue, 08 Dec 2020

Social media has become a ubiquitous part of life in the 21st century. In addition to popular platforms such as Twitter and Facebook, other research-oriented websites and apps (eg ResearchGate, Academia, and Mendeley) have increased in use.

Scientific researchers have begun to leverage these tools to further disseminate their research beyond the traditional peer-reviewed journal publication. In a recent article published in the Journal of Medical Internet Research, Prof Marco Bardus and colleagues conducted a systematic review to explore how social media affects the impact of health research.

The team identified 7 impact studies, which assessed the effect of social media on the dissemination of research, and 44 correlational studies, which assessed the relationship between Altmetrics and bibliometrics. While their analyses of the impact studies suggested that article views may have increased with social media activity, citations did not. The authors cautioned, though, that the social media interventions tested were too heterogeneous to compare, with intervention duration and intensity ranging widely, making it difficult to draw conclusions.

Of the 44 correlational studies (most discussing Twitter and Mendeley), around half found a strong association between traditional citation-based and social media metrics. However, when limiting the analyses to just the 7 correlational studies of high methodological quality, the association was moderate or non-existent.

Despite their inconclusive findings, the authors recommend that researchers continue to use social media to disseminate health research. They note that social media provides the opportunity to reach different, non-specialised readers, and advise researchers to adapt their work for specific target audiences. Sharing research in this way is likely to become increasingly important as publishers take steps to make journal articles more accessible to patients and as the use of plain language summaries to share scientific content with the public continues to grow.


——————————————————–

Summary by Kristian Clausen MPH from Aspire Scientific

——————————————————–

With thanks to our sponsor, Aspire Scientific Ltd


Post-production misconduct: an emerging trend in scientific fraud
https://thepublicationplan.com/2020/10/22/post-production-misconduct-an-emerging-trend-in-scientific-fraud/
Thu, 22 Oct 2020

Traditionally, research misconduct has largely fallen into three types — fabrication, falsification and plagiarism. But, in the digital age of scientific publishing and with the increasing use of metrics, new forms of scientific manipulation are emerging that do not affect the research within an article but do enhance its impact, so-called ‘post-production misconduct’.

As discussed in a recent essay by Professor Mario Biagioli in the Los Angeles Review of Books, the use of quantitative metrics to measure undefined concepts, such as the ‘impact’ of a paper, has led to individuals gaming the system to their advantage. Professor Biagioli suggests that examples of such approaches may include:

  • citation rings, where colleagues agree to extensively cite each other’s articles, regardless of relevance
  • coercive citations, where peer reviewers and editors ‘encourage’ authors to cite the reviewers’ own research in order to gain a good review
  • creating co-authors from prestigious universities to facilitate publication
  • buying a place on an author byline of an article submitted for publication by a writing company
  • more radically, hacking journal databases and adding your name to the byline of an accepted article.

Professor Biagioli describes how these practices can inflate an academic’s citation metrics, which can lead to improved career prospects or financial bonuses. In turn, academics with high citation counts feed into other metrics used to assess the ‘excellence’ of universities which, Professor Biagioli suggests, are themselves not immune to practices that manipulate the system to their advantage.

The extent of citation manipulation (or citation hacking as it may be called), either through self-citation, citation rings, or coercive citation, was the subject of another article available as a preprint on bioRxiv and summarised in a Nature news article by Richard Van Noorden. The research carried out by Jonathan D. Wren and Constantin Georgescu used an algorithm to analyse the PubMed database to identify unusual citing patterns.

Their findings suggested that around 16% of authors may have engaged in some kind of reference list manipulation.

Given their results, the authors believe that introducing a system to detect and prevent citation hacking may be warranted.
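Wren and Georgescu’s actual algorithm is more sophisticated, but a toy heuristic (not theirs) hints at how unusual citing patterns might be surfaced: for each paper, measure the fraction of its references that share an author with the paper itself, then flag authors whose self-citation rate is an outlier. A hedged sketch with hypothetical data:

```python
def self_citation_rate(paper_authors, references):
    """Fraction of a paper's references that share at least one author
    with the paper itself -- a crude proxy for self-citation, not the
    detection algorithm used in the study."""
    authors = set(paper_authors)
    if not references:
        return 0.0
    # Count references whose author list overlaps the paper's authors
    shared = sum(1 for ref_authors in references if authors & set(ref_authors))
    return shared / len(references)

# Hypothetical paper by Smith & Jones citing four earlier works
rate = self_citation_rate(
    ["Smith", "Jones"],
    [["Smith", "Lee"], ["Patel"], ["Jones"], ["Garcia", "Kim"]],
)
# 2 of the 4 references share an author with the paper, so rate = 0.5
```

A real detection system would compare such rates against field baselines and would also need to model citation rings, where the manipulation spans multiple authors and never appears as self-citation.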

Professor Biagioli highlights that the difference between this type of misconduct and more traditional methods of scientific manipulation is that it is ongoing, continuing long after the research has been published — impact accumulates as citations increase over time. As long as scientists are rewarded on the basis of metrics such as citation counts, there will always be an incentive for citation hacking: the Nature article concludes that, ultimately, it is this system that will need to change.


——————————————————–

Summary by Alice Wareham PhD, CMPP from Aspire Scientific

——————————————————–

With thanks to our sponsor, Aspire Scientific Ltd

