KEY TAKEAWAYS
- A literature-mining project featured in Nature News showed that some papers heavily rely on retracted research.
- Technological tools, such as Guillaume Cabanac’s ‘Problematic Paper Screener’, could be part of the solution.

The results of a project featured in Nature News showed that problematic research continues to amass citations, even after retraction. Perhaps unwittingly, some authors cite large numbers of retracted papers, which can raise questions about their own work once it is published. With retracted works accounting for as much as 65% of the citations in some papers, there is a drive to harness technology to solve the problem.
You are what you cite
Worryingly, problematic papers can continue to be cited long after retraction. Although not a definitive sign of misconduct, heavy reliance on research that has been withdrawn can retrospectively undermine a paper’s reliability. Unfortunately, no system exists to alert researchers to retractions that may affect papers they have already authored.
Hoping to change this, “research integrity sleuth” Guillaume Cabanac, who is behind the project reported in Nature News, has developed tools such as his ‘Feet of Clay’ detector, which flags papers that cite retracted works (and his earlier ‘annulled detector’, which tracks the retracted papers themselves).
As well as encouraging publishers to conduct regular checks and notify authors of any retractions they have cited, Cabanac urges authors to make use of these and other tools, such as plug-ins that can automatically flag papers that have received comments on PubPeer, before submitting papers.
“You always have to double-check what you’re basing your work on.”
Tools to clean up the literature
As Cabanac himself reports, the Feet of Clay detector is just the latest addition to his Problematic Paper Screener, an automated system for flagging papers that may warrant further scrutiny. Launched in 2021, the screener tracks the global landscape of retractions and uses multiple detectors to automatically mine the literature for signs of potential misconduct, such as:
- ‘tortured phrases’ typically seen when AI rewrites existing scientific content
- ‘fingerprints’ associated with random paper generators such as SCIgen or Mathgen
- nonsensical content, such as cell lines or nucleotide sequences that have been fabricated
- citations to journals known to have been hijacked.
Cabanac hopes that this software will enable continuous evaluation of the published literature; over 875,000 papers have been flagged and assessed via the system so far. He calls publishers and authors to action: “A combined preventive and curative effort from all involved is key to sustaining the reliability of the scientific literature — a crucial undertaking for science and for public trust.”