Research integrity – The Publication Plan
A central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing.
https://thepublicationplan.com

When politics meets publishing: researchers fight back
17 December 2025 – https://thepublicationplan.com/2025/12/17/when-politics-meets-publishing-researchers-fight-back/

KEY TAKEAWAYS

  • US government executive orders targeting EDI programmes are prompting federally funded journals to censor demographic data and equity-focused language.
  • Authors and editors are pushing back to ensure data are made available and to maintain the integrity of the scientific record.

Following US government executive orders to end federal equity, diversity, and inclusion (EDI) programmes and to only recognise two sexes, The BMJ has emphasised the importance of retaining sex and gender data in published research. In an article in Undark, Peter Andrey Smith highlights another example of the scientific community pushing back against federal pressure to remove EDI-related data.

Authors make a stand

Smith describes the case of anthropologist Tamar Antin and co-authors, who faced an unusual request from the federally funded journal Public Health Reports following acceptance of their paper on tobacco use. The editors requested removal of the word “equitably” and demographic data, citing compliance with executive orders. Rather than grant the request, Antin and co-authors withdrew their paper entirely and went public. This “act of defiance” was met with widespread support from the scientific community, who argued that removing demographic data doesn’t just affect one paper’s conclusions – it hampers future studies by denying other scientists the opportunity to reanalyse findings or build on existing research.

“Removing demographic data doesn’t just affect one paper’s conclusions – it hampers future studies by denying other scientists the opportunity to reanalyse findings or build on existing research.”

The bigger picture

Smith also shares examples of federally funded researchers who, citing the political landscape, have requested changes to accepted papers, including:

  • withdrawal of the paper
  • removal of authors from bylines
  • specific wording changes.

While such requests directly affect only a minority of submissions, maintaining the integrity of the scientific record is paramount.

Looking ahead, the Committee on Publication Ethics’ position statement emphasises that publishing decisions and language choices should not be influenced by politics or government policies, and that retractions must never be used to censor the scientific record.

————————————————

Have the US executive orders around EDI directly impacted your work?

Over 100 institutions back eLife’s reviewed preprint model
26 November 2025 – https://thepublicationplan.com/2025/11/26/over-100-institutions-back-elifes-reviewed-preprint-model/

KEY TAKEAWAY

  • More than 100 institutions have declared their support for eLife’s reviewed preprint model, following the journal’s loss of impact factor.

Rather than only accepting papers recommended for publication by peer reviewers, eLife publishes all reviewed research as reviewed preprints. However, Clarivate, the provider of Web of Science, only indexes peer-reviewed content, and eLife therefore lost its impact factor for 2025. Instead of changing its publishing model, eLife agreed to be partially indexed in Web of Science’s Emerging Sources Citation Index (ESCI). But how has this been received?

As reported in Research Information, eLife surveyed over 100 institutions and funders to assess how its publishing model is viewed. Over 95% of respondents endorsed non-traditional publishing approaches like eLife’s, confirming that such publications will continue to be factored into hiring, promotion, and funding decisions.

Promoting integrity or outdated metrics?

Dr Nandita Quaderi, Senior Vice President and Editor-in-Chief of the Web of Science at Clarivate, stressed that policies must be applied universally to protect research integrity. Quaderi warned that “cover-to-cover indexing of journals in which publication is decoupled from validation by peer review risks allowing untrustworthy actors to benefit from publishing poor quality content”.

On the other hand, Ashley Farley, Senior Officer of Knowledge & Research Services at the Gates Foundation, believes Web of Science’s policy “reinforces outdated publishing metrics that hinder innovation”, while Damian Pattinson, Executive Director at eLife, noted that with increasing emphasis on open science, “eLife remains confident that its model represents the future of scholarly publishing – one that prioritises scientific quality, transparency, and integrity over outdated prestige metrics”.

“eLife remains confident that its model represents the future of scholarly publishing – one that prioritises scientific quality, transparency, and integrity over outdated prestige metrics.”
– Damian Pattinson, eLife

As debates over the future of the impact factor continue, Farley believes that “indexers must evolve to support responsible, transparent models like eLife’s”.

—————————————————

Are journal impact factors important when deciding where to publish research?

Restoring trust in science: a proposed framework for verifying researcher identity
12 November 2025 – https://thepublicationplan.com/2025/11/12/restoring-trust-in-science-a-proposed-framework-for-verifying-researcher-identity/

KEY TAKEAWAYS 

  • The International Association of Scientific, Technical & Medical Publishers’ Research Identity Verification Framework aims to tackle fraudulent submissions, including from paper mills.
  • The framework of layered identity checks for researchers, peer reviewers, and editors aims to raise obstacles to misconduct and enhance transparency, while maintaining inclusivity for all authentic researchers.

Research is facing an unprecedented integrity challenge, with sophisticated paper mills publishing poor-quality and fraudulent papers by unverifiable researchers and fake personas. To combat this issue, the International Association of Scientific, Technical & Medical Publishers (STM) has developed a Research Identity Verification Framework, released for community review. In an interview with Retraction Watch, Hylke Koers, Chief Information Officer at STM, shared how the framework could be used by journals and institutions to verify the identity of researchers.

Why is the framework needed?

Currently, publishers rely on time-consuming manual checks to validate the identity of contributors such as authors, peer reviewers, or guest editors. These processes do not match the speed and organisation of fraudulent networks. Part of the problem lies in the ease with which untraceable digital identities can be created and used to manipulate key parts of the publishing pipeline, for example, suggesting a fake reviewer. New approaches are needed to tackle this growing issue.

How will the framework be used?

The framework introduces a layered, systemic method of identity verification. Suggested methods include asking individuals to:

  • validate an institutional email address
  • sign in via ORCID or use ORCID Trust Markers
  • provide a government document, such as a passport or driving licence.

Koers notes that implementing these checks would make impersonation or identity theft more difficult and improve accountability, while multiple options for verification retain accessibility. Publishers are advised to assess the level of risk, asking “how confident can we be that this person is who they claim to be, and that the information they’ve provided is genuine?”.

“Implementing these checks would make impersonation or identity theft more difficult and improve accountability.”
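Koers’ risk-based framing could, in principle, be operationalised as a weighted score over the checks an individual has passed. The sketch below is purely illustrative: the check names, weights, and thresholds are assumptions of this summary, not part of STM’s actual framework.

```python
# Illustrative only: weight each verification layer by how hard it is to fake.
# The names and weights here are invented for this sketch.
CHECK_WEIGHTS = {
    "institutional_email": 1,
    "orcid_sign_in": 1,
    "orcid_trust_markers": 2,
    "government_id": 3,
}

def identity_confidence(passed_checks: set) -> int:
    """Sum the weights of the checks an individual has passed."""
    return sum(w for name, w in CHECK_WEIGHTS.items() if name in passed_checks)

def meets_risk_threshold(passed_checks: set, threshold: int) -> bool:
    """Publishers could set higher thresholds for higher-risk roles,
    e.g. guest editors versus co-authors."""
    return identity_confidence(passed_checks) >= threshold

# A co-author with an institutional email and ORCID sign-in clears a low bar...
assert meets_risk_threshold({"institutional_email", "orcid_sign_in"}, 2)
# ...but not a higher one reserved for, say, guest editors.
assert not meets_risk_threshold({"orcid_sign_in"}, 3)
```

The point of the layering is that multiple verification routes keep the process accessible while making a fully fabricated persona expensive to construct.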

What are the next steps?

The success of the Research Identity Verification Framework will rely on widespread adoption. The STM plans to collaborate with early adopters to develop practical implementation pathways and refine future recommendations.

Koers notes that ultimately, no framework can eliminate all fraud, but making it more difficult to act fraudulently and easier to trace and respond to publishing misconduct should have a positive impact.

—————————————————

Do you believe STM’s Research Identity Verification Framework will reduce academic fraud?

Safeguarding scientific image quality and integrity: what more can be done?
29 October 2025 – https://thepublicationplan.com/2025/10/29/safeguarding-scientific-image-quality-and-integrity-what-more-can-be-done/

KEY TAKEAWAYS

  • Scientific image editing serves a vital role in clear communication, but seeking presentation clarity must not compromise data integrity.
  • Combatting image manipulation requires systematic collaboration across the research ecosystem, including standardised guidelines and new verification technologies.

As concerns mount over image manipulation in scientific publishing, the research community has begun developing new strategies to balance visual clarity with data integrity. Writing in Nature, Sara Reardon explores the “fine line between clarifying and manipulating”, highlighting the challenge of making figures both accessible and faithful to original data.

The art and science of visual presentation

Scientific images often require editing for clarity, such as adjusting brightness, adding scale bars, or enhancing contrast. While such modifications are essential for effective scientific communication, a 2021 study by Helena Jambor and colleagues revealed that poorly presented figures remain surprisingly common, suggesting researchers need better training in visual data presentation.

When enhancement becomes manipulation

The boundary between legitimate clarification and misconduct can be perilously thin. Science integrity consultant Elisabeth Bik warns that even minor edits – such as cloning image sections to cover dust particles – can undermine data credibility. Echoing a seminal 2004 article, Bik emphasises that “the images are the data”, meaning they should present the results actually observed rather than those the researchers expected. Any undisclosed alteration that changes the scientific message could constitute misconduct. As Reardon notes, the cardinal rule remains to “show your work” – enhancing clarity without obscuring underlying data.

“The boundary between legitimate clarification and misconduct can be perilously thin… the cardinal rule remains to ‘show your work’ – enhancing clarity without obscuring underlying data.”

Detection and prevention strategies

Phill Jones examines potential systemic solutions to what Bik calls science’s “nasty Photoshop problem” in The Scholarly Kitchen. Journals increasingly conduct pre-publication screening using image-integrity specialists or AI tools that have demonstrated substantial promise in identifying manipulated images. Guidelines such as those from the International Association of Scientific, Technical & Medical Publishers aim to standardise best practice, while individual journals are also establishing specific image integrity requirements. Beyond journals:

  • Institutions are urged to provide training and embed image integrity expectations into research culture.
  • Post-publication peer-review platforms also play a role in identifying problematic images after publication.

Looking ahead, technical innovations offer promise. Jones highlights developments such as encrypted hashes and digital ‘signatures’ embedded in images, akin to secure web certificates, that could enable reliable verification of image authenticity. Ongoing collaboration and systematic change across the research ecosystem will be required to ensure scientific images are both clear and credible.
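In its simplest form, the hash-based verification Jones describes could look like the sketch below: a cryptographic fingerprint recorded when an image is acquired, which later fails to match if the bytes have been altered. Real proposals go further (signing, trusted registries); this minimal example only illustrates the tamper-evidence idea.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a SHA-256 hex digest acting as a tamper-evident fingerprint,
    recorded at acquisition time (e.g. by the microscope software)."""
    return hashlib.sha256(image_bytes).hexdigest()

def verify(image_bytes: bytes, recorded_digest: str) -> bool:
    """Check a submitted image against the digest recorded at acquisition."""
    return fingerprint(image_bytes) == recorded_digest

# Raw capture data (placeholder bytes for illustration).
original = b"\x89PNG...raw capture data..."
digest = fingerprint(original)

# The unaltered image verifies; any modification, however small, does not.
assert verify(original, digest)
assert not verify(original + b" edited", digest)
```

Note that a bare hash only proves the image is unchanged since the digest was recorded; establishing *when* and *by whom* it was recorded is where the certificate-style infrastructure comes in.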

—————————————————

Are current image integrity detection tools sufficient to prevent manipulation in scientific publishing?

Wiley develops AI guidelines in response to demand from researchers
1 October 2025 – https://thepublicationplan.com/2025/10/01/wiley-develops-ai-guidelines-in-response-to-demand-from-researchers/

KEY TAKEAWAYS

  • Wiley embraces a future-looking AI policy with guidelines on responsible and ethical use, with human oversight, to ensure the integrity of publications.
  • The guidelines also provide tips on how AI can be used, effective prompt engineering, and choosing the best AI tools for the project.

Artificial intelligence (AI) is becoming more widely adopted within scientific publishing, yet many authors remain unsure how to use it effectively while maintaining the integrity of their research. As highlighted in an article in Research Information, Wiley have released AI guidelines for book authors in response to findings that ~70% of researchers want publisher guidance on using AI.

The guidelines include:

  • Reviewing terms and conditions: authors should regularly review terms and conditions to ensure that their chosen AI technology does not claim ownership over the content or limit its use.
  • Maintaining human oversight: AI should assist but not replace authors. Authors must take full responsibility for their work and review any AI-generated content before submission.
  • Disclosing AI use: authors should document all AI use, including its purpose and impact on findings, and describe how AI-generated content was verified.
  • Ensuring protection of rights: authors must ensure that the AI used (or its provider) does not gain rights over the authors’ material, including for the purposes of training the AI.
  • Using AI responsibly and ethically: authors must comply with data protection laws, avoid using AI to copy the style or voice of others, fact-check the accuracy of AI-generated content, and be mindful of potential biases.

The guidance also provides recommendations on how to write prompts and select AI tools, as well as suggestions on use cases for authors newer to AI:

  • analysing research and recognising themes across sources
  • exploring ways to simplify complicated topics
  • adapting work so it is relatable for different audiences
  • polishing work by refining language and checking for consistency.

The guidelines complement Wiley’s existing generative AI framework for journal publications. As stated by Jay Flynn (Wiley EVP & General Manager, Research & Learning), “writers and researchers are already using AI tools, whether publishers like it or not. At Wiley, we’d rather embrace this shift than fight it”.

“Writers and researchers are already using AI tools, whether publishers like it or not. At Wiley, we’d rather embrace this shift than fight it”
– Jay Flynn, Wiley EVP & General Manager, Research & Learning

—————————————————

What do you think – should publishers give authors more guidance on how to use AI appropriately?

Difficulty assigning peer review is exacerbating publication delays: is it time for a new approach?
19 August 2025 – https://thepublicationplan.com/2025/08/19/difficulty-assigning-peer-review-is-exacerbating-publication-delays-is-it-time-for-a-new-approach/

KEY TAKEAWAYS

  • Challenges with securing peer reviewers may not be linked to a “shrinking reviewer pool” but underutilisation of the wider global pool.
  • New approaches, such as developing fit-for-purpose search tools, engaging junior experts, and offering viable compensation, may help journals source new peer reviewers.

Peer review is key to scientific integrity, so why is it becoming increasingly difficult for journals to secure peer reviewers? This topic was explored in a recent Springer Nature article authored by Arunas Radzvilavicius. The huge increase in peer review requests during the publication boom of the last 20 years has made it harder for journals to match manuscripts with willing reviewers. But does this reflect a shrinking reviewer pool?

In fact, the number of potential reviewers is growing at a faster rate than publications, according to Radzvilavicius. This suggests the ‘reviewer shortage’ is due to limitations in the methods for matching reviewers. Radzvilavicius describes barriers to securing peer reviewers:

  • repeat invitations to the same individuals
  • high reviewer workloads
  • distrust of commercial publishers
  • lack of viable incentives.

“Journals should tap into the global reviewer pool to address the ‘reviewer shortage’.”

Alternative approaches to finding reviewers

Radzvilavicius emphasises journals should tap into the global reviewer pool to address the ‘reviewer shortage’. Journals could:

  • Replace Google Scholar with more advanced, impartial peer review tools. Radzvilavicius describes Google Scholar as the go-to method of sourcing reviewers, but its algorithms are opaque and prone to bias. Fit-for-purpose tools should be developed with global coverage, regular updates, automated tracking of invitation and acceptance rates, and filters to avoid over-used reviewers.
  • Utilise AI. Automating time-intensive tasks, such as verifying statistics and ethics statements, through large language models would significantly reduce reviewers’ workloads.
  • Engage junior expert reviewers. Highlight the opportunities for career progression and acknowledgement that peer review offers, and provide workshops and networking events.
  • Introduce financial compensation. To address concerns that incentivising peer review may impact quality, Radzvilavicius argues that the opposite may be true: “paying for the service allows you to demand a high-quality product”.  

Radzvilavicius emphasises that there are “plenty of reviewers worldwide” – we just need better ways of finding them. Changing the approach could offer broad benefits, accelerating quality peer review.

—————————————————

Do you believe there is a shortage of suitable peer reviewers, impacting the speed of peer review?

Retractions and corrections are falling under the radar: should open repositories step up?
6 August 2025 – https://thepublicationplan.com/2025/08/06/retractions-and-corrections-are-falling-under-the-radar-should-open-repositories-step-up/

KEY TAKEAWAYS

  • Most open access repositories have evolved without sufficient means to communicate corrections or retractions.
  • Metadata, such as DOIs, could be used to link all article versions and ensure corrections/retractions are clearly indicated to readers.

Open access repositories have an important role in disseminating scientific research. But what happens when a journal corrects or retracts a publication? A recent LSE Impact Blog article describes Frédérique Bordignon’s alarming discovery around how well this is captured by repositories.

Open repositories’ ‘blind spot’ to corrections and retractions

As Bordignon explains, most journals display up-to-date editorial notices alongside publications, although their clarity can vary. Open repositories, on the other hand, do not necessarily pull through correction or retraction information from the published versions they mirror, and guidance from the Confederation of Open Access Repositories is lacking.

To examine the topic further, Bordignon’s team conducted a manually verified analysis of the world’s second largest institutional repository, HAL, by cross-checking its records against 24,430 corrected or retracted publications extracted from the Crossref x Retraction Watch database. Shockingly, they found that 91% of corrections/retractions were not indicated in the repository. Bordignon emphasises that this situation is not unique to HAL, but reflective of repositories across the world.

“91% of corrections/retractions were not indicated in the repository…this situation is…reflective of repositories across the world.”

How to ‘fill the gap’ in effective reporting of corrections

The solution? Bordignon points out that open repositories have a powerful opportunity to ‘fill the gap’ in effective reporting of corrections. However, rather than expecting repository managers to make individual version control decisions for every publication, Bordignon suggests that open repositories:

  • create their own archives
  • clearly display the editorial status of each article
  • include a permanent, bidirectional link to the corrected published version
  • enable automated updates through partnerships with Crossref x Retraction Watch, making use of metadata such as digital object identifiers
  • incorporate platforms that detect and report retractions, such as PubMed, PubPeer, and Scite.
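The metadata-driven route in the list above could work roughly as follows. This sketch assumes the shape of Crossref’s public REST API (a query such as `/works?filter=updates:<DOI>` returning notice records carrying an `update-to` field); the sample records and DOIs below are invented for illustration.

```python
def editorial_notices(crossref_items: list) -> list:
    """Extract (update type, notice DOI) pairs from Crossref work records,
    as returned by a query like /works?filter=updates:<article-DOI>.
    A non-empty result means the article has been corrected or retracted."""
    notices = []
    for item in crossref_items:
        for update in item.get("update-to", []):
            notices.append((update.get("type", "unknown"), item.get("DOI", "")))
    return notices

# Invented sample response items, shaped like Crossref notice records.
sample = [
    {"DOI": "10.1234/notice.1",
     "update-to": [{"DOI": "10.1234/article.1", "type": "retraction"}]},
]

# The repository could flag the archived copy and link to the notice DOI.
assert editorial_notices(sample) == [("retraction", "10.1234/notice.1")]
assert editorial_notices([{"DOI": "10.1234/clean.1"}]) == []
```

Run periodically against a repository’s DOI list, a check of this kind would let editorial status be surfaced automatically rather than relying on manual curation.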

Bordignon provides a stark reminder that omitting correction and retraction notices from open repositories risks users learning from, citing, or even propagating flawed science, which can ultimately “erode public trust in science”. She urges open repositories to galvanise their position in the fight for research integrity, paving the way for a more streamlined archiving system that leaves readers in no doubt as to the reliability of the information they are accessing.

—————————————————

Do you agree that open repositories need to clearly identify corrected or retracted publications?

Is high-volume publishing threatening research integrity?
1 July 2025 – https://thepublicationplan.com/2025/07/01/is-high-volume-publishing-threatening-research-integrity/

KEY TAKEAWAYS

  • A recent analysis revealed ~20,000 scientific authors publishing impossibly high numbers of articles.
  • High-volume publishing in the pursuit of inflated metrics represents a threat to research integrity.

We have reported previously on the rising numbers of highly prolific scientific authors. Dalmeet Singh Chawla recently highlighted this issue in Chemical & Engineering News, discussing findings that ~20,000 scientists from Stanford’s top 2% list publish an “implausibly high” number of papers. Singh Chawla explored the implications of high-volume publishing on research integrity, as well as potential solutions.

Study findings

The study, published in Accountability in Research, examined the publication patterns of ~200,000 researchers spanning 22 distinct disciplines, from Stanford University’s list of top 2% scientists (based on citation metrics). It found that:

  • around 10% (20,000 scientists) produced an impossibly high volume of publications
  • some scientists published hundreds of studies per year, with hundreds or even thousands of new co-authors
  • approximately 1,000 were early-career scientists with ≤10 years’ academic experience.

Impact on research integrity

The analysis’s authors, Simone Pilia and Peter Mora, attribute the surprisingly high number of hyperprolific authors to a culture that rewards publication quantity with high metric scores. They suggest that this not only compromises research quality but leads to some scientists, “particularly the younger ones”, feeling pressured. Pilia and Mora linked the incentive to churn out large quantities of publications with “unethical practices” such as the inclusion of co-authors who have not made adequate contributions to the research. Based on their findings, Pilia and Mora warn that normalising high-volume publishing poses a significant threat to the fundamental academic process.

“Normalising high-volume publishing poses a significant threat to the fundamental academic process.”

A divisive solution?

Pilia and Mora propose adjusting metrics for scientists exceeding publication and co-authorship thresholds. However, according to Singh Chawla, information scientist Ludo Waltman fears that such adjustments would make research evaluation too complex and confusing. He proposes that research assessment should focus less on metrics and more on a wider range of research activities.

The reliability of metrics for research evaluation is an ongoing topic of discussion within the scientific community, and this latest research serves as a reminder for authors to keep research integrity at the heart of their publication decisions.

————————————————–

Do you think high-volume publishing undermines research integrity?

Are open science metrics at odds with research assessment reform?
18 June 2025 – https://thepublicationplan.com/2025/06/18/are-open-science-metrics-at-odds-with-research-assessment-reform/

KEY TAKEAWAYS

  • The key goals of reforming research assessment include reduced reliance on counterproductive, citation-based metrics and promotion of open science.
  • New metrics designed to incentivise open science risk undermining initiatives to improve research evaluation.

Wider adoption of open science and reduced reliance on counterproductive, citation-based metrics are both key goals in the push to reform research assessment. However, in an article for Research Professional News, Ulrich Herb argues that flooding the market with open science metrics designed to incentivise researchers undermines the very reforms they are meant to promote.

Incentivising open science

Herb reports that while open science aims to improve transparency, accessibility, and collaboration in research, initiatives have struggled to gain traction with researchers. In a bid to push open science forward, advocates, research institutions, and funders have designed myriad new metrics to incentivise openness, including:

  • counting outputs such as open access publications, preprints, Findable Accessible Interoperable and Reusable (FAIR) datasets, data management plans, replication studies, and pre-registrations
  • measuring attention from downloads, citations, and media coverage
  • analysing social dimensions via collaborations, diversity, and citizen science activities.

New metrics are already the subject of extensive research and development in Europe.

Open science metrics undermine research assessment reform

Herb believes that open science metrics are experimental, fragmented, and lacking standardisation. Their dependence on quantitative measurement conflicts with the key principles of research evaluation reform, which promote qualitative, holistic assessment. Further, because open science metrics are used both to measure behaviour and influence it, they can encourage ‘metric-driven’ activities, such as using multiple data cuts to generate high numbers of FAIR-licensed datasets, or selecting diamond open access in favour of more appropriate journals. Finally, Herb argues, the current lack of clarity around precisely what open metrics are measuring renders them as counterproductive for research assessment as the citation-based metrics they are designed to replace.

“Because open science metrics are used both to measure behaviour and influence it, they can encourage ‘metric-driven’ activities.”

Using open science metrics as a force for good

Herb suggests that, if standardised, open science metrics could promote open science practices. At present, they risk creating a culture of incentivised behaviours that contradict the very ideals of open, fair, and meaningful research evaluation. The task ahead is to ensure that open science involves a genuine shift in how research is assessed.

————————————————–

What do you think – are open science metrics at odds with improving research evaluation?

What do the public think of preprints?
14 May 2025 – https://thepublicationplan.com/2025/05/14/what-do-the-public-think-of-preprints/

KEY TAKEAWAYS

  • Recent studies suggest that, even when provided with a definition, the general public remains unclear on what a preprint is.
  • The public’s perception of research credibility depends more on the broader framing of research findings than on disclosure of preprint status.

Decades after their introduction, preprints have become a well-established concept within the scientific community. Recent years have seen some publishers move entirely to a reviewed preprint model and organisations such as the ICMJE release updated guidance for authors and editors alike. But what about the public? While those in medical publishing have been debating how best to maintain the speed of preprints while introducing further checks and balances, findings reported in preprints are increasingly being picked up by general news outlets. In an article for Science, Jeffrey Brainard delved into the latest research on public understanding of preprints to examine the risks and benefits of this trend.

Preprint ‘disclaimers’ are not enough

As highlighted by Brainard, two recent studies suggest that – even when preprints are clearly labelled as such – public understanding of preprint status, and its potential implications for reported research, remains low.

In one study, researchers gave over 1,700 US adults adapted versions of real news articles describing preprint-reported study results. After reading the articles, just 30% of participants were able to define ‘preprint’ in a way that showed some understanding of the term. When students were excluded, this proportion almost halved.

Only 17% of the general public understand what a preprint is.

Some versions of the news articles included a definition of the term preprint and an explanation that the findings had not been peer reviewed. Surprisingly, this had little effect on the understanding of the general public, although it did improve students’ ability to define preprints.

Context matters

Another study found that rather than a simple disclosure of preprint status, the wider framing of the article had the most impact on public perception of research credibility. Stronger, more definitive language makes findings appear more trustworthy, while ‘hedging’ language reduces trust.

How to improve public understanding of preprints?

These findings suggest that disclosure of preprint status alone may not be enough to build public understanding. Dr Alice Fleerackers, co-author of both studies, argues that the scientific community must also do more to help the public understand how peer review works. Striking the right balance between speed and credibility of reporting seems likely to remain a key challenge for researchers and communicators.

————————————————–

Do you think research findings in preprints should be reported to the general public by news outlets?
