Citation manipulation – The Publication Plan
A central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing.
https://thepublicationplan.com

Academic metrics unchained: pursuing authentic impact over gamified scores
Thu, 17 Apr 2025 07:21:56 +0000
https://thepublicationplan.com/2025/04/17/academic-metrics-unchained-pursuing-authentic-impact-over-gamified-scores/

KEY TAKEAWAYS

  • Current academic metrics can be manipulated, leading to unethical practices, such as self-citation and citation cartels.
  • The addition of clinical guidelines and Bluesky tracking to the Altmetric Attention Score could help drive a cultural shift toward recognising genuine forms of research impact.

A recent article by Dan K Pearson on the London School of Economics’ Impact of Social Sciences blog sheds light on the growing concern over the gamification of academic metrics. Pearson highlights how the pressure to publish in high-impact journals has led to unethical practices, such as excessive self-citation, citation cartels, and even the emergence of a citation black market where researchers can purchase citations to boost their profiles.

The pressure to publish creates an environment where researchers focus on quantifiable outputs rather than the actual outcomes and societal impacts of their work.

This environment encourages researchers to focus on quantifiable outputs rather than the actual outcomes and societal impacts of their work. Pearson argues that this system not only undermines the integrity of academic research but also discourages public engagement and collaboration. He suggests a cultural shift is needed whereby impact-oriented activities, such as public outreach, are emphasised over citation counts.

Two recent developments to the Altmetric platform could address this need by better reflecting the real-world applications of scholarly work. As reported by Research Information, Altmetric now tracks citations in clinical guidelines, providing insights into how research informs clinical practice and patient care. This addition allows researchers and institutions to assess the practical applications of medical research, thereby informing funding decisions and publication strategies.

Altmetric has also expanded its tracking to incorporate Bluesky, a social media platform favoured by the research community. This inclusion offers a more comprehensive view of research conversations, helping to understand the broader engagement and influence of research.

By embracing these developments and promoting wider recognition of scholarly contributions, the academic community can move beyond the pitfalls of gamified metrics and toward a more authentic assessment of research impact.

————————————————–

Do you believe alternative metrics (altmetrics) provide a better assessment of research impact?

Citation manipulation: a new wave of metrics ‘gaming’?
Tue, 22 Oct 2024 10:31:59 +0000
https://thepublicationplan.com/2024/10/22/citation-manipulation-a-new-wave-of-metrics-gaming/

KEY TAKEAWAYS

  • Artificially boosting citation metrics is an increasingly common, illegitimate, yet lucrative business.
  • Several research teams are striving to identify and prevent these fraudulent practices, but the methods used are becoming alarmingly sophisticated.

Fraudulent publication tactics, from fake data to paper mills, pose a significant threat to the integrity of academic research. Now, increasing rates of citation manipulation are the latest trend to spark concerns among researchers. In a recent Nature News article, Dalmeet Singh Chawla looks at the scale of the threat and efforts to expose unscrupulous practices.

Spot the red flags

Computer scientist Yasir Zaki is among those at the forefront of investigations. As he explained to Singh Chawla, there are some key warning signs to look out for if citation manipulation is suspected:

  • a steep rise in citations shortly after publication
  • citations deriving from limited sources
  • a sudden, large increase in citations.
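The warning signs above can be turned into a simple automated screen. The sketch below is purely illustrative (it is not Zaki's actual method, and the thresholds are arbitrary assumptions): it checks a paper's citation record for early spikes, concentrated citing sources, and sudden jumps.

```python
from collections import Counter

def flag_citation_pattern(citing_sources, monthly_counts, top_share=0.5, spike_factor=3.0):
    """Screen a paper's citation record for the three warning signs.

    citing_sources: one identifier (journal/author) per received citation.
    monthly_counts: citations received in each month since publication.
    Returns a list of triggered flags (empty list = nothing suspicious).
    All thresholds are illustrative, not validated cut-offs.
    """
    flags = []
    total = len(citing_sources)

    # 1. Steep rise shortly after publication: the first three months
    #    account for most of the citations to date.
    if len(monthly_counts) >= 6 and sum(monthly_counts[:3]) > 0.6 * sum(monthly_counts):
        flags.append("early_spike")

    # 2. Citations deriving from limited sources: the single biggest
    #    source supplies more than `top_share` of all citations.
    if total:
        _, top = Counter(citing_sources).most_common(1)[0]
        if top / total > top_share:
            flags.append("concentrated_sources")

    # 3. Sudden, large increase: one month exceeds `spike_factor` times
    #    the average of all preceding months.
    for i in range(1, len(monthly_counts)):
        prev_avg = sum(monthly_counts[:i]) / i
        if prev_avg > 0 and monthly_counts[i] > spike_factor * prev_avg:
            flags.append("sudden_jump")
            break

    return flags
```

A record dominated by one citing journal with most citations landing in the first months would trigger both the `early_spike` and `concentrated_sources` flags, whereas a steady trickle from varied sources passes cleanly.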

The scale of the problem

While in the past, citation manipulation was a more ‘low-tech’ practice, with ‘citation rings’ citing each other’s work, we are presented with a very different picture today. Zaki’s team ran an undercover operation exposing a black market industry that sells citations via paper mills. Further work by the group found that fake preprints were a key method used to artificially bolster citation counts. In another demonstration of how easy ‘citation gaming’ has become in the digital age, a different group were able to list papers on Google Scholar that had been ‘authored’ by a cat and then cite these in fake papers they posted on ResearchGate.

Solutions in sight?

While the scale of this industry is sobering, Singh Chawla shone a light on efforts to tackle the issue:

  • A tool by Guillaume Cabanac (University of Toulouse) detects unusual phrasing indicative of fake research papers. Cabanac reports that many of these papers also contain suspicious citations.
  • Cyril Labbé’s group (Grenoble Alpes University) is developing a tool to flag unusual citation patterns.
  • Zaki’s team suggest a new metric (the citation-concentration index) that identifies authors with citations derived from limited sources.
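The article does not spell out how the proposed citation-concentration index is computed. One plausible, purely hypothetical construction mirrors the h-index: the largest c such that at least c distinct citing sources each contribute at least c citations, so a low value relative to an author's total citations signals citations drawn from few sources.

```python
from collections import Counter

def concentration_index(citing_sources):
    """Hypothetical h-index-style sketch (not the published definition):
    the largest c such that at least c distinct citing sources each
    contribute at least c citations to this author's work."""
    counts = sorted(Counter(citing_sources).values(), reverse=True)
    c = 0
    for rank, n in enumerate(counts, start=1):
        if n >= rank:
            c = rank
        else:
            break
    return c
```

Under this sketch, an author with 50 citations all coming from a single source scores 1, while the same 50 citations spread over many independent sources would score far higher.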

However, as fraudsters come up with new and nuanced ways to ‘game’ the system, the scientific community must remain vigilant.

————————————————–

What do you think – can tech keep one step ahead of fraudulent methods to manipulate citation counts?

The persistence of journal hijacking and how to fight back
Thu, 17 Oct 2024 15:11:23 +0000
https://thepublicationplan.com/2024/10/17/the-persistence-of-journal-hijacking-and-how-to-fight-back/

KEY TAKEAWAYS

  • Hijacked journals imitate legitimate publications, misleading researchers into paying for non-peer-reviewed work.
  • To combat hijacking, publishers should secure their websites and regularly check the accuracy of online listings. Researchers can use tools like Think Check Submit and Retraction Watch’s Hijacked Journal Checker to confirm journal authenticity.

Journal hijacking, in which fraudulent websites impersonate legitimate journals, is an ongoing threat to academic publishing. Hijacked journals deceive researchers into paying fees to publish work that is not peer reviewed, risking reputational damage for both the researcher and the legitimate journal that has been targeted. But what, if anything, can be done? A recent Nature Index article by Jackson Ryan delved into the issues.

The scale of the problem

Economist Anna Abalkina has tracked more than 250 hijacking cases for Retraction Watch over the past 4 years. The tactics used by hijackers include:

  • taking over expired domain names to create fake journal websites
  • using fake URLs that closely mimic legitimate journals
  • creating a website to steal the identity of a journal that lacks an online presence.

Victims across academic publishing

Ryan describes the impact of hijacking on researchers and publishers alike. While duped researchers can see their academic reputation and careers damaged, publishers that fall victim to hijacking can be forced to expend large amounts of time trying to correct the record. The journal International Development Planning Review (IDPR) was hijacked in late 2023. Despite the publisher’s efforts to have the fake site delisted by Google, it remained in top search results for months. In another case, the Scandinavian Journal of Information Systems was hijacked by hackers altering its URL on Scopus. By the time the URL was corrected, hundreds of fake articles had been published under the journal’s name. A common theme among affected publishers is frustration with slow responses from tech companies and online platforms.

A common theme among affected publishers is frustration with slow responses from tech companies and online platforms.

What can we do?

Ryan outlines the debate among experts as to whether much can be done to prevent journal hijacking. Dr Dan Hammett, co-editor of IDPR, is sceptical that it can ever be avoided. However, he and others believe that implementing stronger security measures could significantly reduce the risk. Publishers are urged to strengthen website security, register alternative domains themselves to prevent hijackers from exploiting them, and regularly check journal listings on search engines and platforms such as Scopus. Meanwhile, researchers are encouraged to use tools like Think Check Submit and The Retraction Watch Hijacked Journal Checker to confirm journal authenticity.

————————————————–

What do you think – would implementing stronger security measures in academic publishing help reduce journal hijacking?

ChatGPT: the newest author of scientific research?
Thu, 16 Nov 2023 13:44:00 +0000
https://thepublicationplan.com/2023/11/16/chatgpt-the-newest-author-of-scientific-research/

KEY TAKEAWAY

  • A ‘ChatGPT-authored’ scientific paper highlights the promise and pitfalls of using AI in research and publications.

Use of artificial intelligence (AI) in scientific publishing seems inevitable. While the full capabilities of this fast-changing technology are yet to be determined, some in medical publishing have begun to explore ways to harness the potential of generative AI, while others urge caution and lament a lack of structured guidance. Recently, as reported by Gemma Conroy in Nature News, Professor Roy Kishony and his student, Tal Ifargan, provided new fuel for the debate, by asking ChatGPT to conduct research and write a paper from scratch.

Kishony and Ifargan used a ‘data to paper’ system, in which software acted as a go-between linking humans and generative AI. This system automatically prompted ChatGPT to follow the steps of scientific research, from hypothesis generation to development of a scientific manuscript. In less than an hour, ChatGPT developed a study objective; wrote code to analyse a large, publicly available dataset; and drew conclusions based on its findings and existing literature, which it reported in a 19-page research article.

The study highlighted some promising aspects of incorporating AI into research and publication pathways, namely reduced timelines and the potential to quickly generate written summaries. However, it also shone a light on a number of limitations and risks:

  • False narratives: In this case, ChatGPT claimed to ‘address a gap in the literature’, although the subject (a link between diabetes risk and diet and exercise) was already well investigated.
  • Decrease in research quality: Kishony flagged the risks of generative AI leading to ‘p hacking’ or a flood of low-quality research papers.
  • Incapable of self-correction: Stephen Heard of Scientist Sees Squirrel also provided commentary and analysis on the limitations thrown up by the study, including generative AI’s lack of accuracy. Expert human intervention was required throughout, to spot and correct errors.
  • Regurgitating existing ideas: Heard also emphasised that generative AI creates content based on existing source material, thus perpetuating biases and reducing innovation and creativity.
  • Hallucinations: As explained by Jie Yee Ong in The Chainsaw, ‘hallucinations’ are a well-known problem with generative AI. This study was no exception, with ChatGPT generating fake citations despite access to the published literature. As Ong puts it, “for now, it is best not to treat everything ChatGPT spits out as gospel”.

Kishony and Ifargan’s carefully planned study allowed generative AI’s work to be checked for accuracy by human experts. Researchers agree that these human checks and balances remain essential to ensuring the credibility of scientific research and publications in which AI plays a role.

Researchers agree that human checks and balances remain essential to ensuring the credibility of scientific research and publications in which AI plays a role.

————————————————–

What do you think will be the biggest impact of using AI in the publication of scientific research?

Post-production misconduct: an emerging trend in scientific fraud
Thu, 22 Oct 2020 10:52:26 +0000
https://thepublicationplan.com/2020/10/22/post-production-misconduct-an-emerging-trend-in-scientific-fraud/

Traditionally, research misconduct has largely fallen into three types — fabrication, falsification and plagiarism. But, in the digital age of scientific publishing and with the increasing use of metrics, new forms of scientific manipulation are emerging that do not affect the research within an article but do enhance its impact, so-called ‘post-production misconduct’.

As discussed in a recent essay by Professor Mario Biagioli in the Los Angeles Review of Books, the use of quantitative metrics to measure undefined concepts, such as the ‘impact’ of a paper, has led to individuals gaming the system to their advantage. Professor Biagioli suggests that examples of such approaches may include:

  • citation rings, where colleagues agree to extensively cite each other’s articles, regardless of relevance
  • coercive citations, where peer reviewers and editors ‘encourage’ authors to cite the reviewers’ own research in order to gain a good review
  • creating co-authors from prestigious universities to facilitate publication
  • buying a place on an author byline of an article submitted for publication by a writing company
  • more radically, hacking journal databases and adding your name to the byline of an accepted article.

Professor Biagioli describes how these practices can inflate an academic’s citation metrics, which can lead to improved career prospects or financial bonuses. In turn, academics with high citation counts feed into other metrics that are used to assess the ‘excellence’ of universities, which, Professor Biagioli suggests, are themselves not immune to practices that manipulate the system to their advantage.

The extent of citation manipulation (or citation hacking as it may be called), either through self-citation, citation rings, or coercive citation, was the subject of another article available as a preprint on bioRxiv and summarised in a Nature news article by Richard Van Noorden. The research carried out by Jonathan D. Wren and Constantin Georgescu used an algorithm to analyse the PubMed database to identify unusual citing patterns.

Their findings suggested that around 16% of authors may have engaged in some kind of reference list manipulation.

Given their results, the authors believe that introducing a system to detect and prevent citation hacking may be warranted.
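The Nature summary does not describe the detail of Wren and Georgescu's algorithm. As a toy illustration of the kind of screen such a system might apply (an assumption, not their method), one could flag authors whose self-citation rate is a statistical outlier relative to the corpus:

```python
from statistics import mean, stdev

def flag_self_citation_outliers(rates, z_cut=2.5):
    """Toy outlier screen, not the published algorithm.

    rates: dict mapping author -> fraction of that author's outgoing
    references that cite their own papers.
    Flags authors whose rate sits more than `z_cut` standard deviations
    above the corpus mean; flagged authors warrant closer inspection,
    not an automatic misconduct verdict.
    """
    mu = mean(rates.values())
    sd = stdev(rates.values())
    if sd == 0:
        return []
    return [author for author, r in rates.items() if (r - mu) / sd > z_cut]
```

In a corpus where most authors self-cite around 5% of the time, an author at 90% stands out immediately; in practice, a real detector would also need to account for field size and legitimate series of follow-up papers.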

Professor Biagioli highlights that the difference between this type of misconduct and more traditional methods of scientific manipulation is that it is ongoing, continuing long after the research has been published — impact accumulates as citations increase over time. As long as scientists are rewarded on the basis of metrics such as citation counts, there will always be an incentive for citation hacking: the Nature article concludes that, ultimately, it is this system that will need to change.


——————————————————–

Summary by Alice Wareham PhD, CMPP from Aspire Scientific

——————————————————–

With thanks to our sponsor, Aspire Scientific Ltd

