Transparency – The Publication Plan
A central online news resource for everyone interested in medical writing, the development of medical publications, and publication planning.
https://thepublicationplan.com (updated Tue, 12 Aug 2025)

Can adopting AI tools unlock a new era of open science?
https://thepublicationplan.com/2025/08/12/can-adopting-ai-tools-unlock-a-new-era-of-open-science/ (Tue, 12 Aug 2025)

KEY TAKEAWAY

  • Generative AI tools can simplify data sharing by automating metadata creation and flagging missed requirements, ultimately enhancing open science.

Artificial intelligence (AI) has proved transformative in scientific research, from experimental design to assisting publishers and streamlining peer review processes. But can it unlock access to research data, code, and protocols frequently lost behind digital and institutional walls? In a recent London School of Economics Impact Blog article, Niki Scaplehorn and Henning Schoenenberger, both at Springer Nature, describe how generative AI could play a pivotal role in reshaping how data are shared, potentially revolutionising open science.

Hurdles to data sharing

The COVID-19 pandemic marked a turning point for open science, with global collaboration and rapid data sharing accelerating breakthroughs. Yet, Scaplehorn and Schoenenberger highlight that there are still considerable challenges to data sharing:

  • a lack of consistent guidance and struggles to align with FAIR standards
  • confusing and overlapping data sharing policies
  • cultural barriers
  • a lack of recognition for data sharing, code publication, and protocol documentation in academia.

Springer Nature saw compliance with data sharing requirements jump from 51% to 87% simply by asking authors to justify why they hadn’t deposited data prior to article acceptance. Scaling this approach, however, demands time and manpower; this, according to Scaplehorn and Schoenenberger, is where generative AI shows potential.

How can AI benefit data sharing?

The authors call for a “product” mindset that treats AI open science tools as services designed around researchers’ needs, rather than top-down mandates or administrative burdens. Scaplehorn and Schoenenberger highlight that AI can benefit data sharing through:

  • automation of metadata creation
  • flagging missing documentation and overlooked requirements
  • suggesting best practices to improve workflows.
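The first two points amount to an automatable "gap check" on dataset metadata. The sketch below is purely illustrative — the field names and the `check_metadata` helper are assumptions, not any publisher's actual tool — but it shows the kind of check that could flag incomplete records before a generative AI layer drafts the missing entries for author review:

```python
# Hypothetical sketch: flag dataset records missing metadata fields that
# data sharing policies commonly require. Field names are illustrative only.
REQUIRED_FIELDS = ["title", "creator", "licence", "repository", "access_statement"]

def check_metadata(record: dict) -> list[str]:
    """Return the names of required metadata fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

record = {"title": "Trial X raw data", "creator": "Smith J", "licence": "CC BY 4.0"}
missing = check_metadata(record)
# An AI assistant could then draft the missing entries for the author to
# confirm, rather than simply bouncing the submission back.
print(missing)  # ['repository', 'access_statement']
```

The value of automating this step is that the flag arrives before article acceptance — the point at which, per the Springer Nature figures above, a simple prompt already lifted compliance substantially.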

“Generative AI could play a pivotal role in reshaping how data are shared, potentially revolutionising open science.”

The path forward

Scaplehorn and Schoenenberger believe that adopting AI tools designed around authors’ needs will streamline the burdensome aspects of data sharing. Ultimately, this will benefit researchers, policymakers, and everyone who relies on access to scientific information by lowering the barriers to open science.

—————————————————

What do you think – can AI be used to increase data sharing?

Seeing the full picture: the RIVA-C checklist for research infographics
https://thepublicationplan.com/2025/06/26/seeing-the-full-picture-the-riva-c-checklist-for-research-infographics/ (Thu, 26 Jun 2025)

KEY TAKEAWAY

  • The RIVA-C checklist helps authors create clear, accurate, and standardised infographics and avoid misinterpretation of the results of comparative studies.

In the evolving landscape of scientific communication, visual tools such as infographics and visual abstracts are increasingly used to present research findings. While they offer quick and accessible summaries, concerns have emerged about their accuracy, clarity, and completeness – especially when used to convey complex comparative studies. To address these challenges, Joshua R. Zadro and colleagues developed the Reporting Infographics and Visual Abstracts of Comparative studies (RIVA-C) checklist and guide, a tool designed to improve the quality and reliability of infographics summarising comparative studies of health and medical interventions.

Why was RIVA-C developed?

Studies have shown that infographics can reduce full-text views, as readers turn to the infographic for a quick summary rather than reading the full article. However, infographics do not always include all the details needed to fully understand a study, increasing the risk of misinterpretation. The authors argue that previous infographic guidelines were either not rigorously developed or focused mainly on formatting and design.

Previous infographic guidelines were either not rigorously developed or focused mainly on formatting and design rather than content quality.

How was RIVA-C developed?

The checklist was developed through a structured consensus process involving 92 participants from a range of professional backgrounds. This process was led by an international Steering Group to ensure diversity of input and methodological robustness.

The RIVA-C checklist

The full checklist comprises 10 items across 3 categories: (1) study characteristics, (2) results, and (3) conclusions/takeaway message – each accompanied by detailed explanations and examples to aid practical implementation. The checklist was piloted over a 6-month period to evaluate its clarity, relevance, and usability.

The future of RIVA-C

RIVA-C aims to enhance the transparency and completeness of infographic reporting, reducing the risk of misinterpretation – especially in the context of influential study types such as randomised controlled trials and systematic reviews.

The authors recommend that journals endorse RIVA-C, similar to other checklists listed on the EQUATOR Network, by including a link and relevant information on their “instructions for authors” page. They also stress that evaluating the implementation of RIVA-C will be essential to inform future modifications to the checklist, ultimately increasing its impact.

RIVA-C may provide a path to improving the clarity and integrity of comparative study infographics. The Steering Group also hopes RIVA-C will lead to the creation of similar checklists in other areas of healthcare research.

————————————————–

Do you think the RIVA-C checklist will improve the quality of infographics?

What do the public think of preprints?
https://thepublicationplan.com/2025/05/14/what-do-the-public-think-of-preprints/ (Wed, 14 May 2025)

KEY TAKEAWAYS

  • Recent studies suggest that, even when provided with a definition, the general public remains unclear on what a preprint is.
  • The public’s perception of research credibility depends more on the broader framing of research findings than on disclosure of preprint status.

Decades after their introduction, preprints have become a well-established concept within the scientific community. Recent years have seen some publishers move entirely to a reviewed preprint model and organisations such as the ICMJE release updated guidance for authors and editors alike. But what about the public? While those in medical publishing have been debating how best to maintain the speed of preprints while introducing further checks and balances, findings reported in preprints are increasingly being picked up by general news outlets. In an article for Science, Jeffrey Brainard delved into the latest research on public understanding of preprints to examine the risks and benefits of this trend.

Preprint ‘disclaimers’ are not enough

As highlighted by Brainard, two recent studies suggest that – even when preprints are clearly labelled as such – public understanding of preprint status, and its potential implications for reported research, remains low.

In one study, researchers gave over 1,700 US adults adapted versions of real news articles describing preprint-reported study results. After reading the articles, just 30% of participants were able to define ‘preprint’ in a way that showed some understanding of the term. When students were excluded, this proportion almost halved.

Only 17% of the general public understand what a preprint is.

Some versions of the news articles included a definition of the term preprint and an explanation that the findings had not been peer reviewed. Surprisingly, this had little effect on the understanding of the general public, although it did improve students’ ability to define preprints.

Context matters

Another study found that rather than a simple disclosure of preprint status, the wider framing of the article had the most impact on public perception of research credibility. Stronger, more definitive language made findings appear more trustworthy, while ‘hedging’ language reduced trust.

How to improve public understanding of preprints?

These findings suggest that disclosure of preprint status alone may not be enough to build public understanding. Dr Alice Fleerackers, co-author of both studies, argues that the scientific community must also do more to help the public understand how peer review works. Striking the right balance between speed and credibility of reporting seems likely to remain a key challenge for researchers and communicators.

————————————————–

Do you think research findings in preprints should be reported to the general public by news outlets?

Embracing AI in publishing: a game-changer for peer review?
https://thepublicationplan.com/2025/03/04/embracing-ai-in-publishing-a-game-changer-for-peer-review/ (Tue, 04 Mar 2025)

KEY TAKEAWAYS

  • Publishers are embracing the use of GenAI to support the peer review process.
  • AI automation of onerous tasks in the publishing workflow will allow editors to spend more time on activities requiring human expertise.

Could artificial intelligence (AI) define the future of publishing? Publishers are beginning to embrace the use of generative AI (GenAI) to improve peer review processes and uphold research integrity. In an article for Research Information, Dave Flanagan, Senior Director of Data Science at Wiley, explores how GenAI is currently used in publishing and how its integration is enhancing innovation and efficiency for both authors and reviewers alike.

A vigilant approach to GenAI use

Flanagan notes that “AI assists people, it does not replace people”. This is reflected in Wiley’s framework, which ensures that their AI tools remain human-driven to maintain the integrity of the publication process. Collaboration between publishers and industry bodies such as the Committee on Publication Ethics (COPE) and the STM Association will help to establish guidelines and standards for GenAI usage.

What is the current guidance on the use of GenAI in publishing?

Authors:

  • must explicitly state any usage of GenAI in their paper
  • are responsible for the accuracy of GenAI-driven information, including correct referencing of supporting material
  • can employ tools to improve grammar and spelling
  • are prohibited from using GenAI for the production or alteration of original research data and results.

Reviewers:

  • must not upload manuscripts or manuscript content into GenAI tools that could use input data for training purposes, breaching confidentiality agreements
  • are permitted to use GenAI tools to improve the quality of written feedback within reports, but must maintain transparency when doing so.

“Using AI tools can free up time for editors to focus on areas demanding human expertise.”

How can AI benefit peer review?

Similar to Papermill Alarm, Wiley’s AI-powered Papermill Detection Service is a useful tool for the early detection of potentially fraudulent papers. Other AI tools in development aim to:

  • identify suitable peer reviewers
  • automate alternative journal suggestions for unsuitable manuscripts
  • streamline the formatting and reference checking process
  • enhance the discoverability of published research.

Using AI tools can free up time for editors to focus on areas demanding human expertise.

In the rapidly evolving world of AI, Flanagan believes its use is “integral to the future of peer review”. The author urges publishers and researchers alike to embrace these powerful tools responsibly, keeping the advancement of knowledge at the core.  

————————————————–

Do you believe that additional AI tools will improve the peer review process?

Are conflicts of interest reported transparently in healthcare guidelines?
https://thepublicationplan.com/2024/11/07/are-conflicts-of-interest-reported-transparently-in-healthcare-guidelines/ (Thu, 07 Nov 2024)

KEY TAKEAWAYS

  • RIGHT-COI&F guides transparent reporting of COIs and funding in healthcare guidelines and policy documents of guideline organisations.
  • The checklist can also be used to assess the quality and completeness of reporting in published guidelines.

Healthcare guidelines substantially influence clinical practice and policy and are developed through extensive analysis and decision-making. Amid broader issues with accurate disclosure in medical publishing, a recent Annals of Internal Medicine article by Yangqin Xun and colleagues highlighted that while guidelines are especially sensitive to conflicts of interest (COIs) and funder influence, disclosure is generally poor.

Clear and complete reporting of COIs and funding is crucial for credibility and is monitored as a key open science indicator. Yet existing checklists, such as Reporting Items for practice Guidelines in HealThcare (RIGHT), often lack detail on how to report COIs and funding. Xun et al. aimed to address this, building on RIGHT to develop a COI- and funding-specific extension. RIGHT-COI&F can be used both while developing healthcare guidelines and to assess completeness of COI and funding reporting.

RIGHT-COI&F can be used both while developing healthcare guidelines and to assess completeness of COI and funding reporting.

Checklist development

RIGHT-COI&F development followed the recommendations of the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network, based on a published protocol. Key steps were:

  • establishing working groups, including an expert panel
  • generating an initial checklist based on existing materials and a stakeholder survey
  • agreeing checklist items through surveying experts and consensus meetings
  • refining and testing the checklist.

RIGHT-COI&F items: policy and implementation

RIGHT-COI&F has 27 items, 18 focused on COIs and 9 on funding. Most items are related to policy and include:

  • defining the types of interest to be disclosed (eg, based on relevance, financial amount, or time period) and by whom
  • how accuracy and completeness are verified
  • processes for determining whether interests are conflicts
  • strategies to manage COIs
  • whether accepting funding from certain sources is restricted.

Organisational policies may fulfil these items, alleviating the need for detailed descriptions in individual guidelines.

The remaining items relate to implementation in individual projects, such as ensuring that declared interests are reported in detail, alongside the funding received (and the role of funders).

Next steps

To promote adoption, the authors plan to translate RIGHT-COI&F into multiple languages, disseminate it through academic networks, and seek endorsement by medical journals. Further assessment of real-life feasibility and impact is planned. We look forward to seeing how RIGHT-COI&F helps uphold transparency and trust in the healthcare space.

————————————————–

What do you think – will the RIGHT-COI&F checklist improve the transparency and credibility of guidelines?

Publisher policies on AI use: is it time for change?
https://thepublicationplan.com/2024/10/10/publisher-policies-on-ai-use-is-it-time-for-change/ (Thu, 10 Oct 2024)

KEY TAKEAWAYS

  • The increasing use of AI tools in academic publishing calls for policies that keep pace with the myriad ways that authors and researchers use AI.
  • An AI risk register that looks at specific risks inherent in individual tools and the ways they are used, plus collaboration among publishers to create standardised guidance, could be the key.

Protecting the integrity of the scientific record becomes more challenging as the role of AI in academic publishing expands. In a recent article for The Scholarly Kitchen, Avi Staiman expresses his concerns about the lack of adequate publisher policies on AI use and sets out what publishers could do to step up their game.

Where do current policies come up short?

Staiman reports that while authors are eager to implement AI, most lack the expertise to navigate its full potential while protecting research integrity. For instance, Oxford University Press (OUP) reported that 76% of researchers use AI in their research, yet 72% are unaware of their institution’s policies on AI.

76% of researchers use AI in their research, but 72% are also unaware of their institution’s policies on AI.

Alongside this, publishers’ struggles to keep up to date with the latest developments in AI hamper the development of suitable guidelines. Limitations of current policies include:

  • lack of clarity on the roles of authors versus AI in individual cases (for example, who created the content vs who refined it)
  • failure to consider the wide range of available AI tools and their differing uses (substantive vs non-substantive AI use)
  • oversimplified AI policies that equate to blanket disclosure statements on the use of AI only, rather than looking at what was used and how.

Staiman argues that, given the diversity of AI tools that now exist — from those capable of performing statistical analysis, such as JuliusAI, to those assisting with literature searches, like Scite — the ways in which we tackle transparency and regulation need to evolve.

How can publisher AI policies keep pace with AI technology?

To this end, and inspired by the EU AI Act, Staiman suggests formulating an ‘AI risk register’ that assigns AI tools a level of regulation that matches both the potential risk inherent in that tool and the way it is being used in research. He also recommends 8 practical actions for publishers:

  1. Develop standardised guidelines
  2. Update guidelines continuously
  3. Establish transparent and inclusive governance
  4. Boost learning on AI within individual organisations
  5. Assign different risk levels to AI tools
  6. Classify AI tools based on the type of use and the level of verification required
  7. Define clear roles for authors and AI
  8. Consider how to monitor and enforce AI policies
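Actions 5 and 6 above can be pictured as a simple lookup table keyed on both the tool's use case and whether the use is substantive. The sketch below is entirely hypothetical — the categories and risk tiers are assumptions loosely based on the examples Staiman mentions (analysis tools such as JuliusAI, discovery tools such as Scite), not a published register:

```python
# Illustrative AI risk register: the required oversight depends on both what
# the tool does and how it is used. All entries are hypothetical examples.
RISK_REGISTER = {
    ("statistical_analysis", "substantive"): "high",    # e.g. analysis tools like JuliusAI
    ("literature_search", "substantive"): "medium",     # e.g. discovery tools like Scite
    ("grammar_polish", "non_substantive"): "low",
}

def required_oversight(use_case: str, use_type: str) -> str:
    """Map a (use case, use type) pair to a risk tier; unknown uses get escalated."""
    return RISK_REGISTER.get((use_case, use_type), "needs manual review")

print(required_oversight("statistical_analysis", "substantive"))  # high
print(required_oversight("image_generation", "substantive"))      # needs manual review
```

Defaulting unlisted combinations to manual review, rather than a fixed tier, reflects the article's point that policies must keep pace with tools that do not yet exist.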

Staiman calls upon publishers to rapidly collaborate so that AI policies can keep pace with the fast-moving changes in AI technology.

————————————————–

What do you think – are current publisher policies on AI use robust enough to ensure research integrity?

EQUATOR and COS join forces to bring open science to the fore
https://thepublicationplan.com/2024/09/17/equator-and-cos-join-forces-to-bring-open-science-to-the-fore/ (Tue, 17 Sep 2024)

KEY TAKEAWAYS

  • A partnership between the EQUATOR Network and the Center for Open Science (COS) could further the objectives of both organisations and raise awareness of best practices for open science.
  • Anticipated activities include educational outreach for researchers and updated reporting guidelines.

The open science movement aims to improve the transparency, accessibility, and reproducibility of scientific research. In May this year, the EQUATOR Network and Center for Open Science (COS) announced a 3-year collaboration in the hopes of accelerating the uptake of open science practices in health research through a series of shared activities.

A shared mission

Since launching the Open Science Framework (OSF) in 2012 – a project management tool designed to streamline collaboration on, and dissemination of, scientific research – COS have been on a mission to facilitate and incentivise open research practices. This approach is highly complementary to EQUATOR’s objective to improve research quality and transparency, leading the organisations to collaborate on development of the Transparency and Openness Promotion (TOP) Guidelines in 2015.

Nearly a decade later, the two are joining forces officially.

What can we expect?

Planning is ongoing, but several potential strategies are being explored:

  • Educating researchers on processes such as writing and protocol creation, through a combination of outreach materials and toolkits
  • Developing toolkits to guide reviewers in assessing data sharing practices and protocol deviation
  • Increasing the visibility and use of existing tools, such as COS registration templates and EQUATOR reporting guidelines, through shared hosting
  • Integrating practices such as protocol posting, data sharing, and study replication into existing EQUATOR reporting guidelines, where these are not yet included.

In particular, COS is keen to utilise EQUATOR’s existing systems to enhance research credibility by promoting the uptake of preregistration.

The potential impact

Open science practices are already included in CONSORT, but inclusion in further reporting guidelines could scale up adoption substantially. In addition, the robustness of EQUATOR’s reporting standards could offer further structure and visibility to COS’ ongoing research.

Director of the EQUATOR Network, David Moher, has expressed his excitement around the partnership:

“Since its inception in 2006, the EQUATOR Network has worked hard to help improve comprehensive and transparent reporting of research. Collaborating with COS will help further achieve this objective.”

————————————————–

Do you think open science practices should be included in reporting guidelines?

The paper mill problem: are AI tools the answer?
https://thepublicationplan.com/2024/08/01/the-paper-mill-problem-are-ai-tools-the-answer/ (Thu, 01 Aug 2024)

KEY TAKEAWAYS

  • In a test run, a new AI-based system developed by scientific publisher Wiley flagged 10–13% of submitted manuscripts as potential fakes.
  • Generative AI tools could help combat the threat posed by paper mills to research integrity.

An AI-based service designed to detect bogus scientific articles flagged 10–13% of submitted manuscripts in a pilot run, according to a blog post by Ivan Oransky for Retraction Watch. The fake papers were caught by publisher Wiley’s Papermill Detection service, which screens submissions ahead of editorial review. The multi-tool system is a promising development in ongoing efforts to ensure the integrity of published research.

Spotting fake articles

Paper mills are paid to produce fake research papers, which can appear very similar to genuine manuscripts. According to Wiley, its new system uses 6 different approaches to identify what it calls “potentially compromised research content”:

  • checking for similarity with existing paper mill papers
  • flagging the use of “tortured phrases”
  • identifying authors with unusual publication behaviour
  • verifying the identity of researchers
  • detecting potential misuse of generative AI
  • checking that manuscripts fall within a journal’s scope.
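Of these six approaches, the “tortured phrases” check is the easiest to picture as code. The sketch below is not Wiley's implementation — it is a minimal, assumed approach that scans text for awkward paraphrases of standard scientific terms of the kind documented in the paper mill literature (e.g. “counterfeit consciousness” substituted for “artificial intelligence”):

```python
# Minimal sketch of a tortured-phrase screen: search manuscript text for known
# awkward paraphrases of standard scientific terms. The phrase list here is a
# small illustrative sample, not an exhaustive or official database.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "bosom peril": "breast cancer",
    "irregular woodland": "random forest",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (found phrase, likely intended term) pairs for editorial follow-up."""
    lowered = text.lower()
    return [(p, t) for p, t in TORTURED_PHRASES.items() if p in lowered]

sample = "We trained a profound learning model to detect bosom peril."
print(flag_tortured_phrases(sample))
# [('profound learning', 'deep learning'), ('bosom peril', 'breast cancer')]
```

As with Wiley's service, a hit from a screen like this would be a signal for an editor to investigate, not grounds for automatic rejection.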

The test run involved over 270 Wiley journals, which rejected between 600 and 1,000 submitted manuscripts per month once they started using the tool. A spokesperson for Wiley told Retraction Watch that flagged papers would not automatically be rejected, but would be considered by an editor before being processed further. The publisher says it is partnering with Sage and IEEE for its next testing phase, and aims to roll out the service as early as next year.

The test run involved over 270 Wiley journals, which rejected between 600 and 1,000 submitted manuscripts per month once they started using the tool.

Paper mill problems

Paper mills are a major source of articles that end up being retracted after publication. Most manuscripts retracted in 2023 were published by Hindawi, a subsidiary of Wiley, with a high proportion involving Chinese authors. This led to a government-initiated review that required all university researchers in China to declare their retracted papers.

Last year, Wiley closed 4 Hindawi journals due to paper mill issues and announced that it would stop using the Hindawi brand. Wiley has since discontinued another 19 journals overseen by Hindawi, which it said was due to portfolio integration.

Possible solutions on the horizon

Investigations into retractions should help ensure the integrity of published research, but there is growing interest in using new tools such as Papermill Alarm to help stop fake papers getting published in the first place. Wiley say their new service will complement the STM Integrity Hub, a resource developed by academic publishers that incorporates Papermill Alarm and other tools to help combat fake science.

While much discussion around developments in AI has focused on possible threats to research integrity, spotting bogus manuscripts could be an area where AI could help restore trust in published science.

————————————————–

Will AI tools that spot fake manuscripts drive paper mills out of business?

Building trust: ACCORD guidelines for reporting consensus methods
https://thepublicationplan.com/2024/07/09/building-trust-accord-guidelines-for-reporting-consensus-methods/ (Tue, 09 Jul 2024)

KEY TAKEAWAY

  • The ACCORD reporting guidelines comprise a 35-item checklist that aims to improve the transparency of reporting on consensus methods.

The COVID-19 pandemic highlighted the need for effective knowledge-sharing to guide healthcare decisions. In rapidly evolving situations, reaching consensus among experts from diverse backgrounds is crucial, especially when evidence is emergent or inconsistent. This process is best achieved using formal consensus methods.

Despite their critical role in healthcare and policy decision-making, consensus methods are often inadequately reported, leading to inconsistencies and lack of transparency. To address these issues, the ACcurate COnsensus Reporting Document (ACCORD) project was established to develop comprehensive guidelines for reporting the numerous consensus methods used in medical research.

The ACCORD reporting guidelines aim to enhance trust in the recommendations made by consensus panels, benefiting authors, journal editors, reviewers, and, ultimately, patients through more reliable healthcare recommendations.

The ACCORD checklist was formulated using the EQUATOR Network’s methodology for developing reporting guidelines, with the full study protocol published in Research Integrity and Peer Review. The project began with a systematic review, followed by 3 rounds of the Delphi process and several steering committee meetings. To ensure a comprehensive perspective, a diverse panel was engaged, comprising 72 participants from 6 continents and various professional backgrounds, including clinical, research, policy, and patient advocacy. Through this rigorous process, a preliminary checklist was refined to a final list of 35 essential items covering all sections of a manuscript.

The ACCORD reporting guidelines aim to enhance trust in recommendations made by consensus panels, benefiting authors, journal editors, reviewers, and ultimately patients through more reliable healthcare recommendations.

————————————————–

What do you think – will the ACCORD guidelines improve the transparency of reporting on consensus methods?

eLife’s ‘reviewed preprint’ model: results from the first year
https://thepublicationplan.com/2024/07/02/elifes-reviewed-preprint-model-results-from-the-first-year/ (Tue, 02 Jul 2024)

KEY TAKEAWAYS

  • A year after the launch of their ‘reviewed preprint’ model, the journal eLife has released their key findings.
  • eLife report over 6,200 submissions, 2.5× faster time to publication, and no significant change in quality.

In January 2023, eLife made the radical decision to end the process of accepting or rejecting papers after peer review, in favour of publishing ‘reviewed preprints’. A year on, they have released their key findings.

What is the ‘reviewed preprint’ model?

In this model, all articles selected for peer review are published on the eLife website as a reviewed preprint alongside an eLife assessment, public reviews, and a response from the authors (if provided).

What are the key results?

In the first year, eLife report:

  • over 6,200 submissions received and more than 1,300 reviewed preprints published
  • over 2.5× faster time from submission to publication than the legacy model
  • no significant change in the quality of submissions (based on ratings for significance and strength of evidence)
  • quality of eLife assessments and public reviews rated highly by authors.

When the new model was launched, eLife reported that views across academic publishing were mixed, with concerns that:

  • authors would not submit their work
  • editors and reviewers would not want to be involved
  • articles would be of low quality or only from researchers with the most confidence in their work.

However, a year on, eLife consider the reality to be much more encouraging, highlighting how:

  • editors and reviewers have been able to focus on summarising the strengths and weaknesses of an article, with their views open for debate
  • authors and reviewers have been able to exchange views openly without fear of articles being rejected
  • the majority of authors have revised their articles in response to reviewer comments, resulting in what eLife believe to be ‘better science all around’.

The majority of authors have revised their articles in response to reviewer comments, resulting in what eLife believe to be ‘better science all around’.

What’s next?

Going forward, eLife commit to continued evolution and adaptation. One proposal is to extend this approach to articles that may not typically be published by broad-interest journals, such as important negative or preliminary findings.

eLife welcome ideas to help them achieve these aims. They also encourage other publishers to adopt some aspects of their approach by making their software infrastructure freely available.

————————————————–

Would you be more likely to submit to eLife based on these results?
