Retraction Watch – The Publication Plan
A central online news resource for everyone interested in medical writing, the development of medical publications, and publication planning.
https://thepublicationplan.com

Safeguarding scientific image quality and integrity: what more can be done?
https://thepublicationplan.com/2025/10/29/safeguarding-scientific-image-quality-and-integrity-what-more-can-be-done/
Wed, 29 Oct 2025

KEY TAKEAWAYS

  • Scientific image editing serves a vital role in clear communication, but seeking presentation clarity must not compromise data integrity.
  • Combatting image manipulation requires systematic collaboration across the research ecosystem, including standardised guidelines and new verification technologies.

As concerns mount over image manipulation in scientific publishing, the research community has begun developing new strategies to balance visual clarity with data integrity. Writing in Nature, Sara Reardon explores the “fine line between clarifying and manipulating”, highlighting the challenge of making figures both accessible and faithful to original data.

The art and science of visual presentation

Scientific images often require editing for clarity, such as adjusting brightness, adding scale bars, or enhancing contrast. While such modifications are essential for effective scientific communication, a 2021 study by Helena Jambor and colleagues revealed that poorly presented figures remain surprisingly common, suggesting researchers need better training in visual data presentation.

When enhancement becomes manipulation

The boundary between legitimate clarification and misconduct can be perilously thin. Science integrity consultant Elisabeth Bik warns that even minor edits – such as cloning image sections to cover dust particles – can undermine data credibility. Echoing a seminal 2004 article, Bik emphasises that “the images are the data”, meaning they should present the results actually observed rather than those the researchers expected. Any undisclosed alteration that changes the scientific message could constitute misconduct. As Reardon notes, the cardinal rule remains to “show your work” – enhancing clarity without obscuring underlying data.

“The boundary between legitimate clarification and misconduct can be perilously thin… the cardinal rule remains to ‘show your work’ – enhancing clarity without obscuring underlying data.”

Detection and prevention strategies

Phill Jones examines potential systemic solutions to what Bik calls science’s “nasty Photoshop problem” in The Scholarly Kitchen. Journals increasingly conduct pre-publication screening using image-integrity specialists or AI tools that have demonstrated substantial promise in identifying manipulated images. Guidelines such as those from the International Association of Scientific, Technical & Medical Publishers aim to standardise best practice, while individual journals are also establishing specific image integrity requirements. Beyond journals:

  • Institutions are urged to provide training and embed image integrity expectations into research culture.
  • Post-publication peer-review platforms also play a role in identifying problematic images after publication.

Looking ahead, technical innovations offer promise. Jones highlights developments such as encrypted hashes and digital ‘signatures’ embedded in images, akin to secure web certificates, that could enable reliable verification of image authenticity. Ongoing collaboration and systematic change across the research ecosystem will be required to ensure scientific images are both clear and credible.
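
The hash-based verification Jones describes can be illustrated with a minimal sketch: compute a cryptographic digest of the raw image bytes at acquisition, then re-compute and compare it later to detect any alteration. This is only an illustration of the principle – the signing and digest-storage infrastructure is omitted, and the sample bytes are invented.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a SHA-256 hex digest acting as a tamper-evident fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def verify(image_bytes: bytes, recorded_digest: str) -> bool:
    """Re-hash the image and compare against the digest recorded at acquisition."""
    return fingerprint(image_bytes) == recorded_digest

# A raw capture and a version with a few altered bytes.
original = b"\x89PNG...raw microscope capture..."
digest_at_capture = fingerprint(original)
tampered = original.replace(b"raw", b"fix")

print(verify(original, digest_at_capture))   # True
print(verify(tampered, digest_at_capture))   # False
```

Even a single changed byte produces a completely different digest, which is what makes such fingerprints useful for authenticity checks; pairing the digest with a digital signature (as in web certificates) would additionally prove who recorded it.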

—————————————————

Are current image integrity detection tools sufficient to prevent manipulation in scientific publishing?

Retractions and corrections are falling under the radar: should open repositories step up?
https://thepublicationplan.com/2025/08/06/retractions-and-corrections-are-falling-under-the-radar-should-open-repositories-step-up/
Wed, 06 Aug 2025

KEY TAKEAWAYS

  • Most open access repositories have evolved without sufficient means to communicate corrections or retractions.
  • Metadata, such as DOIs, could be used to link all article versions and ensure corrections/retractions are clearly indicated to readers.

Open access repositories have an important role in disseminating scientific research. But what happens when a journal corrects or retracts a publication? A recent LSE Impact Blog article describes Frédérique Bordignon’s alarming discovery around how well this is captured by repositories.

Open repositories’ ‘blind spot’ to corrections and retractions

As Bordignon explains, most journals display up-to-date editorial notices alongside publications, although clarity can vary. Open repositories, on the other hand, do not necessarily pull through correction or retraction information from their published counterparts, and guidance from the Confederation of Open Access Repositories is lacking.

To examine the topic further, Bordignon’s team conducted a manually verified analysis of the world’s second largest institutional repository, HAL, by cross-checking its records against 24,430 corrected or retracted publications extracted from the Crossref x Retraction Watch database. Shockingly, they found that 91% of corrections/retractions were not indicated in the repository. Bordignon emphasises that this situation is not unique to HAL, but reflective of repositories across the world.

“91% of corrections/retractions were not indicated in the repository…this situation is…reflective of repositories across the world.”

How to ‘fill the gap’ in effective reporting of corrections

The solution? Bordignon points out that open repositories have a powerful opportunity to ‘fill the gap’ in effective reporting of corrections. However, rather than expecting repository managers to make individual version control decisions for every publication, Bordignon suggests that open repositories:

  • create their own archives
  • clearly display the editorial status of each article
  • include a permanent, bidirectional link to the corrected published version
  • enable automated updates through partnerships with Crossref x Retraction Watch, making use of metadata such as digital object identifiers
  • incorporate platforms that detect and report retractions, such as PubMed, PubPeer, and Scite.
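
The metadata-matching step behind such automated updates can be sketched offline: compare a repository's DOIs against a retraction data export. The column name and sample rows below are assumptions for illustration, not the actual Crossref x Retraction Watch schema.

```python
import csv
import io

# Illustrative stand-in for a Retraction Watch-style data export.
SAMPLE_EXPORT = """OriginalPaperDOI,RetractionNature
10.1234/abc123,Retraction
10.5678/def456,Correction
"""

def load_flagged_dois(csv_text: str) -> set[str]:
    """Collect DOIs carrying an editorial notice (column name is an assumption)."""
    return {row["OriginalPaperDOI"].strip().lower()
            for row in csv.DictReader(io.StringIO(csv_text))}

def audit(repository_dois, flagged: set[str]) -> list[str]:
    """Return repository records that should display a correction/retraction notice."""
    return [doi for doi in repository_dois if doi.strip().lower() in flagged]

flagged = load_flagged_dois(SAMPLE_EXPORT)
print(audit(["10.1234/ABC123", "10.9999/clean1"], flagged))  # ['10.1234/ABC123']
```

Because DOIs are case-insensitive, both sides are lower-cased before matching – the kind of normalisation a real repository integration would need before it could display notices automatically.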

Bordignon provides a stark reminder that omitting correction/retraction notices from open repositories risks users learning, citing, or even propagating flawed science; this can ultimately “erode public trust in science”. She urges open repositories to galvanise their position in the fight for research integrity, paving the way for a more streamlined archiving system that leaves readers in no doubt as to the reliability of the information they are accessing.

—————————————————

Do you agree that open repositories need to clearly identify corrected or retracted publications?

The paper mill problem: are AI tools the answer?
https://thepublicationplan.com/2024/08/01/the-paper-mill-problem-are-ai-tools-the-answer/
Thu, 01 Aug 2024

KEY TAKEAWAYS

  • In a test run, a new AI-based system developed by scientific publisher Wiley flagged 10–13% of submitted manuscripts as potential fakes.
  • Generative AI tools could help combat the threat posed by paper mills to research integrity.

An AI-based service designed to detect bogus scientific articles flagged 10–13% of submitted manuscripts in a pilot run, according to a blog post by Ivan Oransky for Retraction Watch. The fake papers were caught by publisher Wiley’s Papermill Detection service, which screens submissions ahead of editorial review. The multi-tool system is a promising development in ongoing efforts to ensure the integrity of published research.

Spotting fake articles

Paper mills are paid to produce fake research papers, which can appear very similar to genuine manuscripts. According to Wiley, its new system uses 6 different approaches to identify what it calls “potentially compromised research content”:

  • checking for similarity with existing paper mill papers
  • flagging the use of “tortured phrases”
  • identifying authors with unusual publication behaviour
  • verifying the identity of researchers
  • detecting potential misuse of generative AI
  • checking that manuscripts fall within a journal’s scope.
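
At its simplest, the “tortured phrases” check is a lookup against known awkward paraphrases of standard terms. A toy version might look like the following; the phrase pairs are drawn from published examples of tortured phrases, not from Wiley's actual detector.

```python
# Known "tortured phrases" are awkward machine-paraphrases of standard terms.
# These pairs are illustrative examples reported in the literature.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "bosom peril": "breast cancer",
    "irregular esteem": "random value",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (found_phrase, expected_term) pairs present in the manuscript text."""
    lowered = text.lower()
    return [(phrase, expected)
            for phrase, expected in TORTURED_PHRASES.items()
            if phrase in lowered]

sample = "We apply counterfeit consciousness and profound learning to the data."
print(flag_tortured_phrases(sample))
```

A production screen would use far larger curated lists and fuzzier matching, but the principle – flagging text that paraphrases standard terminology in unnatural ways – is the same.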

The test run involved over 270 Wiley journals, which rejected 600–1,000 submitted manuscripts per month once they started using the tool. A spokesperson for Wiley told Retraction Watch that flagged papers would not automatically be rejected, but would be considered by an editor before being processed further. The publisher says it is partnering with Sage and IEEE for its next testing phase, and aims to roll out the service as early as next year.

The test run involved over 270 Wiley journals, which rejected 600–1,000 submitted manuscripts per month once they started using the tool.

Paper mill problems

Paper mills are a major source of articles that end up being retracted after publication. Most manuscripts retracted in 2023 were published by Hindawi, a subsidiary of Wiley, with a high proportion involving Chinese authors. This led to a government-initiated review that required all university researchers in China to declare their retracted papers.

Last year, Wiley closed 4 Hindawi journals due to paper mill issues and announced that it would stop using the Hindawi brand. Wiley has since discontinued another 19 journals overseen by Hindawi, which it said was due to portfolio integration.

Possible solutions on the horizon

Investigations into retractions should help ensure the integrity of published research, but there is growing interest in using new tools such as Papermill Alarm to help stop fake papers getting published in the first place. Wiley says its new service will complement the STM Integrity Hub, a resource developed by academic publishers that incorporates Papermill Alarm and other tools to help combat fake science.

While much discussion around developments in AI has focused on possible threats to research integrity, spotting bogus manuscripts could be an area where AI could help restore trust in published science.

————————————————–

Will AI tools that spot fake manuscripts drive paper mills out of business?

Why are retraction rates rising?
https://thepublicationplan.com/2024/07/24/why-are-retraction-rates-rising/
Wed, 24 Jul 2024

KEY TAKEAWAYS

  • The retraction rate for biomedical science papers with corresponding authors based at European institutions quadrupled between 2000 and 2020.
  • Unreliable data has emerged as a leading reason for retraction, while duplication remains a key factor.

Research misconduct remains a major concern, with increasing efforts dedicated to monitoring retraction rates – and the underlying reasons. An analysis recently published in Scientometrics and discussed in Nature news uncovered a quadrupling of retraction rates since 2000 among biomedical science articles with corresponding authors based at European institutions, from about 11 per 100,000 articles to almost 45 per 100,000 in 2020.

Why are articles retracted?

Fabián Freijedo-Farinas and colleagues reviewed over 2,000 retracted English-, Spanish-, and Portuguese-language articles collated by Retraction Watch to identify underlying reasons. Research misconduct was the most prevalent factor, accounting for 67% of cases, while 16% of retractions were due to honest errors (with no reason provided for the remainder). The analysis also broke down the specific issues behind misconduct-related retractions.

Reasons have shifted over time, with authorship and affiliation issues falling from one of the top reasons to joint 5th of 7. Duplication has remained steady as a cause, while retractions due to unreliable data – including bias and lack of original data availability – have skyrocketed. The authors suggest paper mills have a major role to play.

However, it’s not the same story across Europe: of the 4 countries with the most retractions, the proportion of duplication-related retractions has fallen in the UK but substantially increased in Italy and Spain.

Why are retraction rates increasing?

Arturo Casadevall, who identified similar rates of research misconduct-related retractions in a 2012 analysis, commented that the overall hike in retraction rates could be due to authors, institutions, and journals increasingly viewing retraction as the best route to correct the scientific record.

The overall hike in retraction rates could be due to authors, institutions, and journals increasingly viewing retraction as the best route to correct the scientific record.

In addition, publications have increasingly drawn the attention of online sleuths, who may raise concerns with journals, according to research integrity specialist Sholto David. New digital technologies are also making it easier to screen publications for suspicious text or data. Retraction Watch co-founder Ivan Oransky believes use of plagiarism-detection software could be partially responsible for the increase; looking to the future, tools like image manipulation detectors could mean retraction rates rise further.


————————————————–

How much do you think increasing use of image manipulation detectors will impact retraction rates?

Crossref’s acquisition of the Retraction Watch database: combined forces for research integrity
https://thepublicationplan.com/2024/01/23/crossrefs-acquisition-of-the-retraction-watch-database-combined-forces-for-research-integrity/
Tue, 23 Jan 2024

KEY TAKEAWAY

  • Crossref has acquired the Retraction Watch database, resulting in the largest freely available database of retracted articles to date.

Launched in 2018 and curated by The Center for Scientific Integrity, the Retraction Watch database has grown to include approximately 43,000 retracted research papers. In September 2023, it was announced that the database had been acquired by Crossref and made freely available.

Combined forces

This new approach marries Retraction Watch’s large database with Crossref’s focus on open metadata, digital object identifiers, and collection of over 130 million records. Publishers will now register retraction notices with Crossref, and Retraction Watch will continue to update the database. Crossref’s dataset also contains over 14,000 retracted articles, meaning that the combined database will be the largest of its kind, comprising around 50,000 retractions. This combination of volume, metadata, and open access will be a first and is an approach that Crossref and Retraction Watch hope will ensure sustainability in the long term.

This sustainability is important as the volume of research articles continues to grow. The retraction of inaccurate publications is vital to maintaining the integrity of the scientific record. Retractions should be clearly publicised to prevent the spread of inaccurate or misleading information by researchers unknowingly citing retracted works.

Looking forwards

Retraction Watch co-founder, Ivan Oransky, and Crossref’s Director of Product, Rachael Lammey, hope that the organisations’ shared expertise will “…greatly increase the openly available information on retractions [which] … in turn helps the community benefit from and rely upon more comprehensive information…”.

The organisations’ shared expertise will “…greatly increase the openly available information on retractions [which] … in turn helps the community benefit from and rely upon more comprehensive information…”.

————————————————

How likely are you to check the Retraction Watch database when next reading or citing a paper?

The hunter or the hunted: why do authors publish in predatory journals?
https://thepublicationplan.com/2023/09/07/the-hunter-or-the-hunted-why-do-authors-publish-in-predatory-journals/
Thu, 07 Sep 2023

KEY TAKEAWAYS

  • Unawareness and unethical motivations are the main reasons authors cite for publishing in predatory journals.
  • Proposed strategies to combat predatory publishing include improving education on publication ethics, creating new, credible publishing platforms, enhancing accountability through open peer review, and tightening regulation of journal recommendation lists.

Predatory journals are a deep-rooted issue in open access publishing, contributing an estimated 420,000 articles in 2014 alone. Despite their widespread presence, the motivations that drive authors toward these journals remain largely unknown. The topic is explored in Chapter 7 of Simon Linacre’s book, The Predator Effect: Understanding the Past, Present and Future of Deceptive Academic Journals. An excerpt from the chapter is available on Retraction Watch.

A review of the limited literature on author motivations highlighted 2 main reasons authors opt for predatory journals:

  • lack of awareness about a journal’s dubious reputation
  • unethical motivations, including incentives tied to career advancement and disillusionment with traditional academic publishing.

Studies looking at why authors might be tempted to engage with predatory publishers identified several factors that likely influence their decisions.

One proposed strategy to tackle the problem is to educate researchers about the issues with predatory journals and review how incentives can tempt authors to publish in them. Other recommendations include:

  • creating new, credible publishing platforms
  • enhancing accountability through open peer review
  • tightening regulation of journal recommendation lists.

Linacre suggests that third-party help may also be needed to support academic authors in successful research publication.

————————————————–

What do you think – would providing further support and improved publication ethics education to academic authors reduce the number publishing in predatory journals?

Do bad apples in academic publishing really spoil the barrel?
https://thepublicationplan.com/2023/08/22/do-bad-apples-in-academic-publishing-really-spoil-the-barrel/
Tue, 22 Aug 2023

KEY TAKEAWAYS

  • Although the number of retracted scientific articles is rising, their impact on clinical practice is unknown.
  • Including retracted articles in reviews may even be acceptable, depending on the reason for retraction.

With existing concerns about the potential impact of retracted scientific articles making their way into systematic reviews and clinical practice guidelines, Dr Jonathan Livingstone-Banks questioned in an article published on Gavi whether this makes a material difference to the interpretation of the affected reviews or guidelines.

Retractions are a hot topic in academic publishing, with an ever-increasing number of papers appearing in the Retraction Watch database. Although progress has been made in the retraction process since its inception, the founders of Retraction Watch have called for more action, such as compensating those who detect fraudulent papers, and perhaps even implementing the ‘bug bounty’ model used in the computer security industry.

Although Dr Livingstone-Banks acknowledges that a striking number of retracted papers are included in reviews and clinical practice guidelines, he expresses doubts over whether they actually impact clinical practice. With over 160,000 reviews published between 2000 and 2019, he suggests that the proportion citing retracted studies is likely to be small, and questions whether they are the high-quality articles that influence decision making. Further, even when they are included in high-quality articles, the meta-analysis process is designed to reduce their impact (eg, by assigning lower weighting to lower quality studies, or by conducting sensitivity analyses that remove questionable studies to test the reliability of the meta-analysis conclusions).

With over 160,000 reviews published between 2000 and 2019, the proportion that cite retracted studies is likely to be small.

Dr Livingstone-Banks notes that Cochrane, a global non-profit group that publishes many meta-analyses across healthcare, conducted a study on whether retracted studies affect its reviews. Cochrane does not always exclude retracted papers and instead considers the reason for retraction. Reasons for exclusion may include clear evidence that the findings are unreliable, whether due to falsified data, plagiarism, faked peer review, or ‘major error’. However, a blanket policy of excluding retracted studies may omit relevant data (eg, in the case of lack of ethical approval rather than data error) or introduce publication bias.

We look forward to following how other publishers choose to deal with retracted articles in the future.

————————————————–

What do you think – should all retracted papers be excluded from reviews and meta-analyses?

How do nonsense papers make their way into reputable journals?
https://thepublicationplan.com/2022/03/17/how-do-nonsense-papers-make-their-way-into-reputable-journals/
Thu, 17 Mar 2022

KEY TAKEAWAYS

  • Nature News article reveals scammers are exploiting the publication process for journals’ special issues to get poor-quality articles published.
  • Hundreds of articles have been retracted or flagged as concerning, with further retractions expected in 2022.

Impersonation fraud is becoming an increasing problem for journals. As outlined by Holly Else in a Nature News article, publishers have uncovered networks of scammers posing as legitimate researchers to gain access to reputable journals and get poor-quality (and often nonsensical) articles published – a phenomenon particularly prevalent for special issue editions. These fraudulent activities threaten the credibility of journals and have led to the retraction of hundreds of articles by top publishers, with the number expected to rise in 2022.

Special issues have been specifically targeted by fraudsters as they are often overseen by expert guest editors who work independently from the journal. A notable example was reported in 2020 by Springer Nature’s Journal of Nanoparticle Research, after scammers posing as eminent scientists tricked the journal into allowing them to manage a special issue on nanotechnology in healthcare. When examining the submissions, the journal noticed that most of the manuscripts were of low quality and/or did not align with the theme of the special issue. Subsequent investigation revealed that the guest editors had used fake domain names that at first sight looked like the real scientists’ institutional email addresses. Many abnormalities in the peer review process were also identified.
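
The lookalike-domain trick described above suggests one simple defence: checking a prospective guest editor's email domain against a verified list of institutional domains rather than trusting it by sight. The domains below are hypothetical, and a real workflow would combine this with identity checks.

```python
# Hypothetical allow-list of verified institutional email domains.
KNOWN_DOMAINS = {"example-university.edu", "example-institute.org"}

def plausible_institutional_email(address: str) -> bool:
    """True only if the address's domain exactly matches a verified domain.

    Lookalike domains (e.g. one substituted character) fail the exact match,
    which is precisely how the fake guest-editor addresses would be caught.
    """
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in KNOWN_DOMAINS

print(plausible_institutional_email("prof@example-university.edu"))   # True
print(plausible_institutional_email("prof@example-univensity.edu"))   # False
```

The second address differs from the first by a single character – easy for a human to miss when "at first sight" the domain looks legitimate, but trivial for an exact-match check to reject.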

“All of the evidence points to an organised network that tries – in this case successfully – to infiltrate scientific journals with the objective of easily publishing manuscripts from pseudo-scientists or less-productive researchers who want to appear in respectable journals.”

(Editorial Board, Journal of Nanoparticle Research)

It is unclear why fraudulent organisations wish to exploit the publication process to publish sham papers. Guillaume Cabanac, a computer scientist who has helped to uncover fabricated papers in special issues, notes that the pressure to publish in academia may be one driver of the phenomenon. However, Retraction Watch co-founder Ivan Oransky argues that these low-quality papers, whose titles often do not make sense, are unlikely to have long-term benefit for an academic résumé. Whatever the reason, the practice appears to be becoming more sophisticated and prevalent. In 2021, Elsevier and Springer Nature each issued expressions of concern for over 400 papers published as part of special issues in certain journals, resulting in the retraction of hundreds of articles.

Elsevier and Springer Nature indicated that they have introduced extra checks and are working to develop computerised tools to identify and prevent attempts to exploit the publication process. We hope that all publishers consider the threat level to their journals and find ways to minimise the risk of the ‘special issue’ scam.

—————————————————–

What do you think – should journals continue to use guest editors to curate special issue publications?

Detectives are on the hunt for fake research papers: Nature analysis investigates the paper mill problem
https://thepublicationplan.com/2021/08/03/detectives-are-on-the-hunt-for-fake-research-papers-nature-analysis-investigates-the-paper-mill-problem/
Tue, 03 Aug 2021

Earlier this year, a Nature analysis examined the problem of paper mills – companies that produce fake scientific manuscripts to order – by evaluating papers publicly linked to them. Although not a new problem, the feature in Nature describes how it was brought to the forefront in a January 2020 blog post by research-integrity ‘sleuths’ who posted a list of published papers that they believed came from paper mills.

The analysis revealed that by March 2021, these detectives had publicly flagged over 1,300 articles that potentially originated from paper mills, and that journals had retracted at least 370 alleged paper mill articles and added expressions of concern to a further 45. Experts suspect this may not be the full extent of the problem and that thousands more articles linked to paper mills may be published in the literature. There are concerns over the size of the problem and the detriment to science as a whole. Prof Jennifer Byrne, a molecular oncology researcher at the University of Sydney and an expert in exposing flawed papers, had this to say about faked cancer studies:

“People die from cancer – it is not a game. It is important that the literature describes the work that takes place.”

A number of publishers and independent analysts are working to combat the problem of paper mills and prevent the publication of these submissions. Red flags that could indicate a paper mill include:

  • similar features in papers from different authors at different institutions
  • irregularities with western blots
  • similar titles
  • identical bar charts representing different experiments
  • identical plots of flow cytometry analyses
  • incorrect nucleotide sequences or reagents
  • papers from Chinese hospitals (the source of a high number of retracted articles)
  • non-academic email addresses
  • no raw data
  • poor English.
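
Screening tools presumably weight such signals rather than treating any single one as decisive. A toy triage score along those lines might look like the following; the flag names, weights, and threshold are invented for illustration and are not any publisher's actual criteria.

```python
# Illustrative weights: strong signals (e.g. duplicated images) score higher
# than weak ones (e.g. a non-academic email address).
RED_FLAG_WEIGHTS = {
    "duplicate_images": 3,
    "tortured_phrases": 3,
    "title_similar_to_known_mill_paper": 3,
    "no_raw_data": 2,
    "non_academic_email": 1,
}

def triage_score(observed_flags) -> int:
    """Sum the weights of observed red flags; unknown flags are ignored."""
    return sum(RED_FLAG_WEIGHTS.get(flag, 0) for flag in observed_flags)

def needs_manual_review(observed_flags, threshold: int = 4) -> bool:
    """Route a submission to a human analyst once the score crosses a threshold."""
    return triage_score(observed_flags) >= threshold

print(needs_manual_review(["no_raw_data", "non_academic_email"]))     # False (score 3)
print(needs_manual_review(["duplicate_images", "tortured_phrases"]))  # True  (score 6)
```

Weighting matters because several of the listed flags (such as a non-academic email address or poor English) are common in perfectly genuine submissions; only their combination is suspicious, which is why the sketch routes borderline cases to a human rather than rejecting outright.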

Publishers are also employing other tactics to spot articles from paper mills, including developing software to identify duplicate images and using analysts to detect problems with submitted manuscripts. However, even where these measures succeed, limits on data sharing between journals mean papers could still be published at another journal. With paper mills increasingly aware of the measures employed to stop their submissions and using more sophisticated methods to avoid detection, the fight against fake research is not yet over.

—————————————————–

What do you think – do journals and publishers need to do more to tackle paper mills?

—————————————————–

A bot is monitoring new publications for retracted article citations
https://thepublicationplan.com/2021/06/01/a-bot-is-monitoring-new-publications-for-retracted-article-citations/
Tue, 01 Jun 2021

Despite steps taken by researchers and journals to maintain research integrity, a small proportion of published papers (around 0.04% as of 2018) go on to be retracted. Lately, retractions of high-profile articles related to COVID-19 have been well publicised, although this is not the norm for other topics. Because retractions are often inadequately communicated, these papers may still be cited in other publications without mention of the retraction – potentially misleading the audience or even invalidating meta-analysis results. A recent article in Nature Index discusses the risks that retractions pose to the scientific literature and the steps being taken to tackle this issue.

One such strategy is the scite Reference Check bot, which flags on Twitter any publications that cite retracted articles after their retraction. The tool can also be used, for a small fee, to check the references in an uploaded manuscript. Although the bot will incorrectly flag citations that appropriately note an article’s retracted status, authors rarely provide this kind of clarification within a citation, so the tool is not expected to generate many false positives.
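
A minimal version of such a reference check might extract DOI-like strings from a manuscript and match them against a set of retracted DOIs. Everything here is illustrative – the regex is deliberately loose, the sample DOIs are invented, and a real service would query a maintained retraction database rather than an in-memory set.

```python
import re

# Loose DOI pattern; real reference parsing is considerably messier than this.
DOI_RE = re.compile(r"10\.\d{4,9}/[^\s\"<>]+", re.IGNORECASE)

# Stand-in for a retraction database (invented DOI).
RETRACTED = {"10.1000/fake.2020.001"}

def check_references(manuscript_text: str) -> list[str]:
    """Extract DOI-like strings and report any that appear in the retracted set."""
    cited = {match.group(0).rstrip(".,;").lower()
             for match in DOI_RE.finditer(manuscript_text)}
    return sorted(cited & RETRACTED)

refs = "1. doi:10.1000/fake.2020.001. 2. doi:10.1000/good.2019.042."
print(check_references(refs))  # ['10.1000/fake.2020.001']
```

Stripping trailing punctuation before matching matters because reference lists usually end each entry with a full stop, which would otherwise become part of the captured DOI.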

The bot is a new addition to the arsenal of resources that can be used to identify retracted articles.

The bot is a new addition to the arsenal of resources that can be used to identify retracted articles, including the Retraction Watch Database, Zotero, and Open Retractions. These tools acknowledge the threat that retractions pose to science and represent a meaningful step towards achieving greater research integrity.

——————————————————–

When developing a publication, which strategy would you be most likely to use to check whether any of your references have been retracted?

——————————————————–

