Peer review publication – The Publication Plan
A central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing. https://thepublicationplan.com

Over 100 institutions back eLife’s reviewed preprint model
https://thepublicationplan.com/2025/11/26/over-100-institutions-back-elifes-reviewed-preprint-model/
Wed, 26 Nov 2025 09:29:37 +0000

KEY TAKEAWAY

  • More than 100 institutions have declared their support for eLife’s reviewed preprint model, following the journal’s loss of impact factor.

Rather than only accepting papers recommended for publication by peer reviewers, eLife publishes all reviewed research as reviewed preprints. However, Clarivate, the provider of Web of Science, indexes only peer-reviewed content, and eLife therefore lost its impact factor for 2025. Instead of changing its publishing model, eLife agreed to be partially indexed in Web of Science’s Emerging Sources Citation Index (ESCI). But how has this been received?

As reported in Research Information, eLife surveyed over 100 institutions and funders to assess how its publishing model is viewed. Over 95% of respondents endorsed non-traditional publishing approaches like eLife’s, confirming that such publications will continue to be factored into hiring, promotion, and funding decisions.

Promoting integrity or outdated metrics?

Dr Nandita Quaderi, Senior Vice President and Editor-in-Chief of the Web of Science at Clarivate, stressed that policies must be applied universally to protect research integrity. Quaderi warned that “cover-to-cover indexing of journals in which publication is decoupled from validation by peer review risks allowing untrustworthy actors to benefit from publishing poor quality content”.

On the other hand, Ashley Farley, Senior Officer of Knowledge & Research Services at the Gates Foundation, believes Web of Science’s policy “reinforces outdated publishing metrics that hinder innovation”, while Damian Pattinson, Executive Director at eLife, noted that with increasing emphasis on open science, “eLife remains confident that its model represents the future of scholarly publishing – one that prioritises scientific quality, transparency, and integrity over outdated prestige metrics”.

“eLife remains confident that its model represents the future of scholarly publishing – one that prioritises scientific quality, transparency, and integrity over outdated prestige metrics.”
– Damian Pattinson, eLife

As debates over the future of the impact factor continue, Farley believes that “indexers must evolve to support responsible, transparent models like eLife’s”.

—————————————————

Are journal impact factors important when deciding where to publish research?

Restoring trust in science: a proposed framework for verifying researcher identity
https://thepublicationplan.com/2025/11/12/restoring-trust-in-science-a-proposed-framework-for-verifying-researcher-identity/
Wed, 12 Nov 2025 14:46:39 +0000

KEY TAKEAWAYS 

  • The International Association of Scientific, Technical & Medical Publishers’ Research Identity Verification Framework aims to tackle fraudulent submissions, including from paper mills.
  • The framework of layered identity checks for researchers, peer reviewers, and editors aims to raise obstacles to misconduct and enhance transparency, while maintaining inclusivity for all authentic researchers.

Research is facing an unprecedented integrity challenge, with sophisticated paper mills publishing poor-quality and fraudulent papers by unverifiable researchers and fake personas. To combat this issue, the International Association of Scientific, Technical & Medical Publishers (STM) has developed a Research Identity Verification Framework, released for community review. In an interview with Retraction Watch, Hylke Koers, Chief Information Officer at STM, shared how the framework could be used by journals and institutions to verify the identity of researchers.

Why is the framework needed?

Currently, publishers rely on time-consuming manual checks to validate the identity of contributors such as authors, peer reviewers, or guest editors. These processes do not match the speed and organisation of fraudulent networks. Part of the problem lies in the ease with which untraceable digital identities can be created and used to manipulate key parts of the publishing pipeline, for example, suggesting a fake reviewer. New approaches are needed to tackle this growing issue.

How will the framework be used?

The framework introduces a layered, systemic method of identity verification. Suggested methods include asking individuals to:

  • validate an institutional email address
  • sign in via ORCiD or use ORCiD Trust Markers
  • provide a government document, such as a passport or driving licence.

Koers notes that implementing these checks would make impersonation or identity theft more difficult and improve accountability, while multiple options for verification retain accessibility. Publishers are advised to assess the level of risk, asking “how confident can we be that this person is who they claim to be, and that the information they’ve provided is genuine?”.

“Implementing these checks would make impersonation or identity theft more difficult and improve accountability.”

What are the next steps?

The success of the Research Identity Verification Framework will rely on widespread adoption. The STM plans to collaborate with early adopters to develop practical implementation pathways and refine future recommendations.

Koers notes that ultimately, no framework can eliminate all fraud, but making it more difficult to act fraudulently and easier to trace and respond to publishing misconduct should have a positive impact.

—————————————————

Do you believe STM’s Research Identity Verification Framework will reduce academic fraud?

The vital role of inclusive publishing in advancing science
https://thepublicationplan.com/2025/09/17/the-vital-role-of-inclusive-publishing-in-advancing-science/
Wed, 17 Sep 2025 13:17:39 +0000

KEY TAKEAWAYS

  • Inclusive publishing recognises the value of all validated research in enhancing scientific reproducibility and progress.
  • Publishers must embrace inclusive practices to reflect diversity within the scientific landscape.

Inclusive journals value null results, preliminary data, and experimental design papers, which promote reproducibility and can hasten innovation. Unlike selective journals, which prioritise ‘high impact’ discoveries, inclusive journals recognise that research does not need to be ground-breaking to be an advancement. In a Springer Nature article, Ritu Dhand discusses the benefits of inclusive publishing.

COVID-19: a case study

Dhand highlights how the COVID-19 crisis created an unprecedented need for peer-reviewed science. Journals responded by adopting inclusive publishing practices, recognising the importance of preliminary data and innovative methods. The rapid dissemination of pilot studies and null results enabled scientists worldwide to focus precious time and effort on pushing unexplored frontiers. Inclusive publishing proved pivotal in an extraordinary global effort to compress drug discovery timelines from years to months. However, these inclusive practices faded after the pandemic.

The price of selectivity

Dhand notes that 50% of funded research is unpublished. Rather than lacking scientific rigour, most rejected papers are turned away because journal editors consider the research to lack significance. A study prepared for the European Commission estimated that in 2018, €26 billion was wasted on duplicated research in Europe alone.

50% of funded research is unpublished. Rather than lacking scientific rigour, most rejections occur because journal editors consider the research to lack significance.

Value beyond citation metrics

Inclusive journals often publish a high number of papers, leading to lower impact factors. However, the value of the research can be measured by other metrics. For example, over a third of Springer Nature’s inclusive content addresses the UN Sustainable Development Goals, demonstrating its societal impact.

Diversity in research publication

Inclusive publication practices also involve increasing the diversity of authors and countries contributing research. Dhand highlights that Western nations and Asia contribute similar proportions of research publications, yet editorial boards and reviewer pools remain Western-dominated. As key decision makers, individuals in these roles should reflect the diversity of the research communities.

Dhand acknowledges that selective journals will continue to offer a platform for ground-breaking research, but highlights the need for widespread inclusive publication practices to satisfy the evolving needs of science and society.

—————————————————

Do you believe selective publication practices are inhibiting scientific advancement and innovation?

Difficulty assigning peer review is exacerbating publication delays: is it time for a new approach?
https://thepublicationplan.com/2025/08/19/difficulty-assigning-peer-review-is-exacerbating-publication-delays-is-it-time-for-a-new-approach/
Tue, 19 Aug 2025 14:11:27 +0000

KEY TAKEAWAYS

  • Challenges with securing peer reviewers may not be linked to a “shrinking reviewer pool” but to underutilisation of the wider global pool.
  • New approaches, such as developing fit-for-purpose search tools, engaging junior experts, and offering viable compensation, may help journals source new peer reviewers.

Peer review is key to scientific integrity, so why is it becoming increasingly difficult for journals to secure peer reviewers? This topic was explored in a recent Springer Nature article authored by Arunas Radzvilavicius. The huge increase in peer review requests through the publication boom of the last 20 years has made it harder for journals to match manuscripts with peer reviewers. But does this reflect a shrinking reviewer pool?

In fact, the number of potential reviewers is growing at a faster rate than publications, according to Radzvilavicius. This suggests the ‘reviewer shortage’ is due to limitations in the methods for matching reviewers. Radzvilavicius describes barriers to securing peer reviewers:

  • repeat invitations to the same individuals
  • high reviewer workloads
  • distrust of commercial publishers
  • lack of viable incentives.

“Journals should tap into the global reviewer pool to address the ‘reviewer shortage’.”

Alternative approaches to finding reviewers

Radzvilavicius emphasises journals should tap into the global reviewer pool to address the ‘reviewer shortage’. Journals could:

  • Replace Google Scholar with more advanced, impartial peer review tools. Radzvilavicius describes Google Scholar as a go-to method of sourcing reviewers, but its algorithms are opaque and prone to bias. Fit-for-purpose tools should be developed with global coverage, regular updates, automated tracking of invitation and acceptance rates, and filters to avoid over-used reviewers.
  • Utilise AI. Automating time-intensive tasks, such as verifying statistics and ethics statements, through large language models would significantly reduce reviewers’ workloads.
  • Engage junior expert reviewers. Highlight the opportunities for career progression and acknowledgement that peer review offers, and provide workshops and networking events.
  • Introduce financial compensation. To address concerns that incentivising peer review may impact quality, Radzvilavicius argues that the opposite may be true: “paying for the service allows you to demand a high-quality product”.  

Radzvilavicius emphasises that there are “plenty of reviewers worldwide” – we just need better ways of finding them. Changing the approach could offer broad benefits, accelerating quality peer review.

—————————————————

Do you believe there is a shortage of suitable peer reviewers, impacting the speed of peer review?

Embracing AI in publishing: a game-changer for peer review?
https://thepublicationplan.com/2025/03/04/embracing-ai-in-publishing-a-game-changer-for-peer-review/
Tue, 04 Mar 2025 09:40:02 +0000

KEY TAKEAWAYS

  • Publishers are embracing the use of GenAI to support the peer review process.
  • AI automation of onerous tasks in the publishing workflow will allow editors to spend more time on activities requiring human expertise.

Could artificial intelligence (AI) define the future of publishing? Publishers are beginning to embrace the use of generative AI (GenAI) to improve peer review processes and uphold research integrity. In an article for Research Information, Dave Flanagan, Senior Director of Data Science at Wiley, explores how GenAI is currently used in publishing and how its integration is enhancing innovation and efficiency for both authors and reviewers alike.

A vigilant approach to GenAI use

Flanagan notes that “AI assists people, it does not replace people”. This is reflected in Wiley’s framework to ensure that its AI tools remain human-driven, maintaining the integrity of the publication process. Collaboration between publishers and industry bodies such as the Committee on Publication Ethics (COPE) and the STM Association will help to establish guidelines and standards for GenAI usage.

What is the current guidance on the use of GenAI in publishing?

Authors:

  • must explicitly state any usage of GenAI in their paper
  • are responsible for the accuracy of GenAI-driven information, including correct referencing of supporting material
  • can employ tools to improve grammar and spelling
  • are prohibited from using GenAI for the production or alteration of original research data and results.

Reviewers:

  • must not upload manuscripts or manuscript content into GenAI tools that could use input data for training purposes, breaching confidentiality agreements
  • are permitted to use GenAI tools to improve the quality of written feedback within reports, but must maintain transparency when doing so.

“Using AI tools can free up time for editors to focus on areas demanding human expertise.”

How can AI benefit peer review?

Similar to Papermill Alarm, Wiley’s AI-powered Papermill Detection Service is a useful tool for the early detection of potentially fraudulent papers. Other AI tools in development aim to:

  • identify suitable peer reviewers
  • automate alternative journal suggestions for unsuitable manuscripts
  • streamline the formatting and reference checking process
  • enhance the discoverability of published research.

Using AI tools can free up time for editors to focus on areas demanding human expertise.

In the rapidly evolving world of AI, Flanagan believes its use is “integral to the future of peer review”. The author urges publishers and researchers alike to embrace these powerful tools responsibly, keeping the advancement of knowledge at the core.  

————————————————–

Do you believe that additional AI tools will improve the peer review process?

AI-accelerated innovation: how can publishers keep up?
https://thepublicationplan.com/2025/01/15/ai-accelerated-innovation-how-can-publishers-keep-up/
Wed, 15 Jan 2025 11:04:03 +0000

KEY TAKEAWAYS

  • AI use in scientific research is increasing both productivity and the size and complexity of datasets.
  • Adoption of AI tools by publishers could enable them to streamline the peer review process and safeguard against circulating flawed data.

Artificial intelligence (AI) is transforming scientific research and increasing productivity. But how can publishers keep up with the consequent surge in submissions, when peer reviewers are already at capacity and the current system may not be fit for purpose? In a recent article for the London School of Economics Impact Blog, Simone Ragavooloo calls on publishers to harness AI to streamline peer review and safeguard against the circulation of flawed data.

Can AI-enabled peer review match increased scientific output?

The Organisation for Economic Co-operation and Development’s 2023 Artificial Intelligence in Science report states that “raising the productivity of research could be the most economically and socially valuable of all the uses of AI”. To realise this potential, however, all steps of the research-to-publication process must align. Ragavooloo argues that publishers must “meet like with like”, utilising AI to streamline the peer review process. For example, Ragavooloo envisions AI doing the “heavy lifting” in areas like statistical analysis, where lack of expertise or statistical training can be limiting factors for reviewers. This would free up human reviewers to focus on aspects requiring greater human insight.

Protecting scientific discourse: can AI catch faulty data?

AI is producing increasingly large and complex datasets. This brings an increased risk of error, which, if unchecked, could lead to widespread dissemination of faulty big data. This prompts another role for AI: AI can identify methodological or statistical errors within vast quantities of information at a rate that is simply impossible for humans. While tools such as Frontiers’ Artificial Intelligence Review Assistant (AIRA) and the STM Integrity Hub are already available to help reviewers triage submitted articles, Ragavooloo believes there is still an unmet need for AI-assisted peer review applications, to ultimately prevent circulation of flawed data.

AI can identify methodological or statistical errors within vast quantities of information at a rate that is simply impossible for humans.

Looking ahead

While recognising we are in a transitional phase, Ragavooloo emphasises that publishers “have the scale and technological expertise” to develop more AI tools, calling on them to put their trust in AI and create “an open path forward” for AI-driven innovation.

————————————————–

What do you think – should publishers develop AI-assisted peer review tools?

eLife’s ‘reviewed preprint’ model: results from the first year
https://thepublicationplan.com/2024/07/02/elifes-reviewed-preprint-model-results-from-the-first-year/
Tue, 02 Jul 2024 15:09:53 +0000

KEY TAKEAWAYS

  • A year after the launch of their ‘reviewed preprint’ model, eLife have released their key findings.
  • eLife report over 6,200 submissions, 2.5× faster time to publication, and no significant change in quality.

In January 2023, eLife made the radical decision to end the process of accepting or rejecting papers after peer review, in favour of publishing ‘reviewed preprints’. A year on, they have released their key findings.

What is the ‘reviewed preprint’ model?

In this model, all articles selected for peer review are published on the eLife website as a reviewed preprint alongside an eLife assessment, public reviews, and a response from the authors (if provided).

What are the key results?

In the first year, eLife report:

  • over 6,200 submissions received and more than 1,300 reviewed preprints published
  • time from submission to publication over 2.5× faster than under the legacy model
  • no significant change in the quality of submissions (based on ratings for significance and strength of evidence)
  • quality of eLife assessments and public reviews rated highly by authors.

When the new model was launched, eLife reported that views across academic publishing were mixed, with concerns that:

  • authors would not submit their work
  • editors and reviewers would not want to be involved
  • articles would be of low quality or only from researchers with the most confidence in their work.

However, a year on, eLife consider the reality to be much more encouraging, highlighting how:

  • editors and reviewers have been able to focus on summarising the strengths and weaknesses of an article, with their views open for debate
  • authors and reviewers have been able to engage in open exchange without fear of articles being rejected
  • the majority of authors have revised their articles in response to reviewer comments, resulting in what eLife believe to be ‘better science all around’.

The majority of authors have revised their articles in response to reviewer comments, resulting in what eLife believe to be ‘better science all around’.

What’s next?

Going forward, eLife commit to continued evolution and adaptation. One proposal is to extend this approach to articles that may not typically be published by broad-interest journals, such as important negative or preliminary findings.

eLife welcome ideas to help them achieve these aims. They also encourage other publishers to adopt some aspects of their approach by making their software infrastructure freely available.

————————————————–

Would you be more likely to submit to eLife based on these results?

Is it time to redesign peer review?
https://thepublicationplan.com/2023/04/27/is-it-time-to-redesign-peer-review/
Thu, 27 Apr 2023 17:04:30 +0000

KEY TAKEAWAYS

  • Breaking peer review into stages could decrease the burden on expert reviewers and improve the quality of published research.

Peer review is a key part of scholarly publishing; however, there have been increasing calls to shift away from the traditional peer review model to make the process more efficient and sustainable. In a Nature World View article, Professor Olavo B. Amaral describes an alternative approach to peer review that could improve data quality and transparency, and lessen the burden on peer reviewers.

Conventional peer review relies on expert referees to evaluate an article’s claims and its suitability for publication in the target journal. Due to time constraints, the underlying data are rarely scrutinised, potentially allowing errors and fraudulent results to go undetected.

Prof. Amaral believes that every manuscript should undergo basic checks to ensure that the data are complete and consistent, calculations are correct, and analyses are reproducible, but that only select articles, such as those of special interest, should be sent out for expert review. Such an approach would allow peer reviewers to use their time more effectively, on papers for which the data have been validated.

“Not all research needs to be reviewed by an expert. Much of the low hanging fruit of quality control doesn’t need a specialist — or even a human.”

Although certain aspects of manuscript quality control could be automated, algorithms work best on structured text, and most scientific fields do not have standardised formats for presenting results. A more fundamental problem is that data checks cannot verify that the data were collected as reported and have not been ‘cherry-picked’. To address this issue systematically, Prof. Amaral suggests that the focus should switch from scrutinising manuscripts to quality control of research practices, as proposed by frameworks such as Enhancing Quality in Preclinical Data (EQIPD). Implementing this change could not only make peer review more viable but could also improve data reproducibility and increase trust in published research.

Prof. Amaral calls on field experts to develop guidelines for data standardisation and urges funding agencies to facilitate the efforts to improve data collection and reporting by, for example, rewarding researchers for having specific aspects of their results certified.

—————————————————–

In your opinion, would breaking peer review into stages and employing algorithms for basic quality checks improve the sustainability of the current peer review system?

Finding the way forward for peer review
https://thepublicationplan.com/2023/03/30/finding-the-way-forward-for-peer-review/
Thu, 30 Mar 2023 13:04:35 +0000

KEY TAKEAWAYS

  • The systems for finding, training, and incentivising peer reviewers may need to change to meet current demand.

Peer review has developed as a means of establishing quality control in research, but can current processes keep up with rapidly increasing research volumes? In a recent Nature Career Feature article, Amber Dance reported on the difficulties and ideas for overhauling the system, drawing on the experiences of a range of stakeholders in the peer review process.

Several issues with current peer review processes were raised:

  • It takes time. Aczel et al. estimated that in 2020, reviewers worldwide spent over 130 million hours (nearly 15,000 years) reviewing articles.
  • It is often unpaid work. While this might reduce the risk of bias, it makes peer reviewing unfeasible for some.
  • Reviewers are becoming more selective about the work they are willing to take on. Some now only peer review for not-for-profit journals or preprints, where they focus on the science rather than suitability for a given journal.
  • There is underrepresentation of junior researchers and those from countries with less well-established research infrastructure.
  • It can be slow, sometimes delaying publication and, with it, the ability of research to shape policy. In some cases, drawn-out processes may even drive researchers to leave academia altogether.

Reviewers are becoming more selective about the work they are willing to take on. Some now only peer review for not-for-profit journals or preprints.

Dance explored opinions on how peer review could change, such as:

  • Incentives for researchers’ time. This might vary from a free journal subscription to the more controversial issue of journals paying for reviews. Other incentives might include giving more recognition to named peer reviewers.
  • Peer review training for early-career researchers and those in lower-income countries, to increase the pool and diversity of potential reviewers.
  • Increasing the use of technology to check aspects of statistics or methods, for example.
  • Reducing the number of reviews needed through increased screening of submissions prior to peer review, allowing authors to ‘recycle’ reviews for a related journal submission, or enabling submission of reviews collected before an initial submission (such as those from eLife reviewed preprints).

Drawing on these perspectives, many changes could be made to peer review – we look forward to seeing how processes may evolve in future.

—————————————————–

What would you most like to see change with peer review?

The reasons why publishing has become faster
https://thepublicationplan.com/2023/02/28/the-reasons-why-publishing-has-become-faster/
Tue, 28 Feb 2023 19:01:59 +0000

KEY TAKEAWAYS

  • Turnaround time is a key consideration during journal selection.
  • The average time from manuscript submission to publication has reduced by 36 days in the last 10 years, mainly thanks to gains at the production stage.

A journal’s turnaround time (TAT), comprising the peer review and production stages, is a key consideration for authors during journal selection, outranked in importance only by a journal’s reputation, Impact Factor, and readership. A recent article written by Christos Petrou for The Scholarly Kitchen reports that the average TAT has shortened over the last decade, calling the rigour of the peer review process into question.

TAT decreased from 199 days in 2011/12 to 163 days in 2019/20, with the reduction seen primarily at the production stage (23 days) but also at peer review (14 days).

Petrou assessed over 700,000 randomly selected papers published in more than 10,000 journals owned by 10 of the largest publishers. He found that the TAT decreased from 199 days in 2011/12 to 163 days in 2019/20, with the reduction seen primarily at the production stage (23 days) but also at peer review (14 days). Petrou suggests that the gains at the production stage were caused by:

  • a shift to a continuous online publishing model, whereby articles are posted as soon as they have completed production, ahead of their inclusion in a final print or online journal issue
  • a rise in early publication of pre-Version of Record articles (journal pre-proofs), which have yet to undergo copyediting, typesetting, and author corrections.

Of note, the acceleration of peer review was driven predominantly by two publishers, with the others showing static or even deteriorating performance. Additional data from one of the two publishers with large gains in peer review speed demonstrated that high editorial standards could be maintained despite decreased TAT. This led Petrou to question whether the slow(ing) performance of other publishers could be the result of operational inefficiencies.

The article highlights the need for, and importance of, a central platform providing TAT metrics to help authors identify suitable journals for their publishing needs.

—————————————————–

In the field of biomedical sciences, what do you consider the ‘right’ publishing speed that enables rapid dissemination, editorial rigour, and quality outcome?
