Preprint – The Publication Plan
A central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing.
https://thepublicationplan.com

What do the public think of preprints?
https://thepublicationplan.com/2025/05/14/what-do-the-public-think-of-preprints/ (14 May 2025)

KEY TAKEAWAYS

  • Recent studies suggest that, even when provided with a definition, the general public remains unclear on what a preprint is.
  • The public’s perception of research credibility depends more on the broader framing of research findings than on disclosure of preprint status.

Decades after their introduction, preprints have become a well-established concept within the scientific community. Recent years have seen some publishers move entirely to a reviewed preprint model and organisations such as the ICMJE release updated guidance for authors and editors alike. But what about the public? While those in medical publishing have been debating how best to maintain the speed of preprints while introducing further checks and balances, findings reported in preprints are increasingly being picked up by general news outlets. In an article for Science, Jeffrey Brainard delved into the latest research on public understanding of preprints to examine the risks and benefits of this trend.

Preprint ‘disclaimers’ are not enough

As highlighted by Brainard, two recent studies suggest that – even when preprints are clearly labelled as such – public understanding of preprint status, and its potential implications for reported research, remains low.

In one study, researchers gave over 1,700 US adults adapted versions of real news articles describing preprint-reported study results. After reading the articles, just 30% of participants were able to define ‘preprint’ in a way that showed some understanding of the term. When students were excluded, this proportion almost halved.

Only 17% of the general public understand what a preprint is.

Some versions of the news articles included a definition of the term preprint and an explanation that the findings had not been peer reviewed. Surprisingly, this had little effect on the understanding of the general public, although it did improve students’ ability to define preprints.

Context matters

Another study found that, rather than a simple disclosure of preprint status, the wider framing of an article had the most impact on public perceptions of research credibility. Stronger, more definitive language made findings appear more trustworthy, while ‘hedging’ language reduced trust.

How to improve public understanding of preprints?

These findings suggest that disclosure of preprint status alone may not be enough to build public understanding. Dr Alice Fleerackers, co-author of both studies, argues that the scientific community must also do more to help the public understand how peer review works. Striking the right balance between speed and credibility of reporting seems likely to remain a key challenge for researchers and communicators.

————————————————–

Do you think research findings in preprints should be reported to the general public by news outlets?

What does the future hold for preprints: credibility vs accessibility?
https://thepublicationplan.com/2025/03/25/what-does-the-future-hold-for-preprints-credibility-vs-accessibility/ (25 March 2025)

KEY TAKEAWAYS

  • ScholCommLab research shows that preprint servers are implementing more moderation measures as they attempt to improve preprint credibility.
  • The authors warn against compromising the very attributes that make preprints invaluable, namely “speed, accessibility, and low barriers to entry”.

A recent article by the London School of Economics examined the challenges associated with enhancing preprint credibility. Research by ScholCommLab suggests that attempts to mitigate the dissemination of unchecked content through increased moderation may risk undermining the accessibility and speed that make preprints such a valuable method of sharing scientific information.

Preprint credibility concerns

The authors remind us of how preprints emerged as an essential tool for the rapid dissemination of new information throughout the COVID-19 pandemic. While preprints were covered by the media at “an unprecedented rate” during that time, journalists are now being more selective about their use due to concerns around the lack of peer review. Arguably, one of the most significant barriers to broader preprint adoption is the perception that preprints are of lesser quality and less reliable than peer-reviewed articles. Critics also question their potential for circulating misinformation, which ultimately damages public trust in science.

While the introduction of credibility measures may boost preprint adoption, the authors warn that this may come at a price.

Measures to improve preprint credibility

ScholCommLab’s findings from interviews with preprint server managers strongly refute any claims that servers allow the spread of unchecked information. Rather, managers have “a strong sense of responsibility toward their communities, the scholarly record, and the public” and feel under pressure to screen preprints for flawed content that could be misleading. As such, servers are introducing a growing number of measures to address concerns over credibility.

The downsides of increased moderation

While the introduction of credibility measures may boost preprint adoption, the authors warn that this may come at a price, such as by:

  • restricting preprints to manuscripts or other formats congruent with journal peer review
  • slowing the availability of new research
  • reducing economic viability
  • undermining the core strengths associated with preprints (ie, “openness, flexibility, and accessibility”)
  • excluding “disadvantaged researchers”, such as those at the beginning of their career and/or at less established institutions.

The authors emphasise the importance of ensuring that preprints’ benefits are not diminished, and ask the community to consider the implications of gatekeeping methods, particularly in relation to future global health crises.

————————————————–

Do preprints need more moderation?

The evolution of evaluation: Richard Sever on the future of peer review
https://thepublicationplan.com/2024/09/10/the-evolution-of-evaluation-richard-sever-on-the-future-of-peer-review/ (10 September 2024)

Peer review is fundamental to the evaluation of biomedical research, ensuring the rigour and credibility of published scientific findings. However, the system is under mounting pressure due to the sheer volume of research being conducted, and the quality and timeliness of research evaluation is increasingly at stake. Richard Sever, co-founder of the bioRxiv and medRxiv preprint servers, is at the forefront of efforts to innovate in this space. We spoke with Richard to discuss his vision for the future of peer review, exploring how preprints and evolving evaluation methods might address the challenges facing scientific publishing today.

You recently participated in a session on the future of peer review at the ISMPP Annual Meeting. Do you believe that the existing peer review model effectively meets the needs of the scientific community, particularly in biomedical and clinical research? If there is room for improvement, what are the main deficiencies of the current system and what can be done to address them?

“I do think there’s room for improvement. When we say peer review, often what we mean is a broader picture that includes the editorial and administrative checks that a journal does, as well as the formal review by peers. That’s where things vary a lot – there are some journals that are incredibly responsible and do a very good job, and we know that there are some where it’s peer review in name only, most obviously the predatory journals. But there’s a spectrum, so there’s a lot of opportunity to improve the process. Part of that might be making different choices for different types of article. For example, for papers where there’s patient involvement, there needs to be far more stringent scrutiny than for a basic research paper. Patient consent for publication, deidentification of patient data, you can’t really expect peer reviewers to do those kinds of checks; you expect the journal to do them. In recent years, I’ve become more concerned about these editorial checks than peer review per se, because opinions will differ on the quality of manuscripts and it’s clearly not the case that the three people who peer review a paper are a representative sample of everybody who could review it; however, the integrity checks that a journal performs may ultimately be more important. Different journals cover different subjects though, so maybe they can approach things differently. A journal dealing with a high volume of basic research papers, for example, may not need to worry as much about certain checks. This is where we start considering the benefits of peer review, and in some cases, it may be better done after publication, leading to a more multidimensional, ongoing process. On the other hand, for a vaccine study, you may want a very thorough peer review before it goes out into the world, depending on the results.”

“…there’s a lot of opportunity to improve peer review. Part of that might be making different choices for different types of article.”

You co-founded the medRxiv preprint server for health science research in 2019. How and where do preprint servers fit into the existing peer review model? Has that positioning evolved in the years since medRxiv was launched?

“The clear thing about preprint servers is that they’re decoupling research dissemination from research evaluation and specifically from peer review evaluation. What has become very clear both in the basic science space and in the clinical space is that you can do this so long as you responsibly put out preprints and make it clear that these are authors’ claims and they have not been verified. This is a good thing, because it acclimatises people to the fact that science can be a bit messy and just because somebody has put something out there, it doesn’t mean it’s necessarily valid. Preprints have demonstrated that you can do this decoupling, which then allows us to have a conversation about what the evaluation should look like. There are checks you can do very quickly at a preprint server: Does this paper look like it’s completely plagiarised? Does it seem completely unreasonable? Once those checks are done and the article is online, there’s more time to do a thorough review with less pressure. This is where the real opportunity lies for journals, and indeed new organisations that want to do peer review differently, to say “OK, the paper is out there, we are now going to evaluate it. Can we evaluate it in a better way because we haven’t had to rush the evaluation as the dissemination has already been achieved”.

“Preprints have demonstrated that you can decouple research evaluation from research dissemination.”

“In the 10 years since bioRxiv launched, we’ve had many different fields embracing this process and people understanding that you have to read the paper yourself; you can’t just take its conclusions on trust. It’s concentrated people’s minds in that respect, because we can all point to papers that apparently underwent ‘peer review’ but we’re aghast that they somehow made it through. What’s interesting is that the existence of bioRxiv is allowing people to begin to experiment with peer review. You now have organisations like Review Commons and Peer Community In, which are not journals; they are peer review services that operate based on the fact that there is already a preprint out there on bioRxiv or medRxiv.

The other thing we’ve certainly found at medRxiv is that you have to do this responsibly. There’s a small number of papers where the findings might influence public behaviour and we say these should go through peer review before dissemination, but that’s not true of 99% of clinical papers. That’s part of medRxiv’s initial screening, the obvious example being a paper claiming a life-saving treatment or vaccine was dangerous and a consequence of its dissemination could be that a lot of people stop taking the treatment – that would be a problem and we wouldn’t post it. But most papers aren’t in that category, and in the clinical space, the pandemic showed that epidemiology could be disseminated as preprints with huge benefit. For example the RECOVERY Trial showing dexamethasone was an effective treatment for severe COVID came out as a preprint on medRxiv many weeks before it appeared in the New England Journal of Medicine.”

Thinking specifically about pharmaceutical industry-sponsored biomedical research, how have pharmaceutical companies embraced the use of preprint servers for disseminating their research findings? Speed of dissemination of preprints was a notable benefit during the COVID-19 pandemic. What are the other motivations for industry to use preprint servers for research dissemination?

“To the credit of the pharmaceutical industry, some of them are trying to figure out whether this is something they can or should do. We did get industry-supported papers showing the effectiveness of the COVID vaccines against different variants and that type of thing during the pandemic. So industry can and should make use of preprint servers. Part of the hesitation is this question of ‘safe harbour’ and what seems not quite resolved in everybody’s minds is whether pharmaceutical companies can put out these sorts of studies under safe harbour. The preclinical studies, the very basic research, I think they’re happy with, but some people in the pharmaceutical industry are worried that if they put out a paper that seems to show a clinical effect as a preprint, then they might be accused of trying to use the preprint server as a way to get around peer review and get out publicity claiming that a treatment works.

Speed of dissemination is the number one motivation for using a preprint server; another motivation is that you can revise preprints. So you can put out a preprint, get some comments, and improve it so that when you do send it to a journal, it’s in much better shape. A lot of people have observed that their papers have had easier rides through peer review journals because they’d ironed out some of the kinks after getting feedback on the preprint. There may also be some papers where you’re just getting some information out there, a follow-up work, for example, that doesn’t need formal peer review, and this will instead come in the community discussion that happens afterwards. I think that’s a debate among the scientific and clinical community as to what percentage of papers fall into that category.”

What are the primary challenges associated with the submission of industry-sponsored research to preprint servers? For clinical studies in particular, there are often considerations relating to proprietary data, regulatory requirements, and the potential for misinformation. How can these challenges be addressed?

“This is why I think it’s important that preprint servers have screening to eliminate or minimise the possibility of misinformation. There is a difference between a responsibly operated server like medRxiv and some databases that don’t screen at all. It’s also why we have more stringent screening checks on medRxiv than bioRxiv, because of these kinds of concerns.

One of the benefits of the preprint server is that it doesn’t claim to have verified the information. I’m far more concerned about misinformation that appears in journals where there is a claim that the information has been peer reviewed, so a journalist then comes across it and assumes it’s been peer reviewed so it must be right. I often joke that the papers that claimed that COVID came from 5G towers were in so called peer-reviewed journals, and not preprints. If that sort of thing came into medRxiv, we wouldn’t post it.”

“I’m more concerned about misinformation that appears in journals where there is a claim that the information has been peer reviewed.”

Preprint review is gaining traction as an approach to evaluating scientific research before formal journal publication, and you’ve mentioned the advantages of decoupling research evaluation from dissemination. How best do you think preprint review can complement traditional journal peer review?

“One obvious way is that a journal that’s doing traditional peer review can factor in the other evaluations that are going on. Review Commons is an interesting example in that you post a paper on bioRxiv, then you can go to Review Commons, who will do the peer review, and then you can take those peer reviews to a journal. There’s also the approach that one of the PLoS journals took, where they were actively looking at comments sections of preprints and taking the discussion into account in their peer review evaluation. I would certainly do that if I were an editor – if you’re getting two or three people’s peer reviews of a paper but there’s lots of discussion about that paper online that seems well-informed, then of course you’d want to factor that into your judgement. In the early days of Twitter, there were a lot of very good discussions of scientific papers – it’s become more polluted in recent years – and that demonstrated the potential for self-organised research evaluation. We shouldn’t lose sight of the fact that that’s what we really mean by peer review. Sometimes we think of peer review as a very formal process done over a period of weeks operated by a journal, but really in the scientific sense, peer review is the scientific community discussing and evaluating work and debating its significance. So it all comes back to this idea of decoupling of research evaluation from dissemination and asking how can we do the evaluation better.”

“We shouldn’t lose sight of the fact that that’s what we really mean by peer review… …the scientific community discussing and evaluating work and debating its significance.”

Thinking about a decoupled approach to research evaluation, what do you think about a model whereby the medical societies commission their own peer reviews instead of the traditional journal peer review approach?

“One of the questions I would ask, if you were a scientific or medical society considering creating a new journal tomorrow and you knew that all the papers were going to be on bioRxiv or medRxiv, is what’s the point in hosting the papers on a website if they’re already on a preprint server? You can just do the review part. This gets back to a phrase that some people have used to describe the future: Publish, Review, Curate. Scientific societies would be perfectly positioned to do that – they have the expertise, and they are seen as working in the interests of the scientific community. The challenge, as with so much of publishing, is the business model and who pays, but that’s a challenge the entire industry is facing. At least the decoupling means that you don’t have to pay for the hosting and putting the papers online because that’s already been done.”

We recently featured a piece on eLife’s ‘reviewed preprint’ model and the journal’s experience from the first year, with faster research dissemination without a reduction in quality. Do you see eLife’s model as a blueprint for the future of biomedical publishing?

“The interesting thing about the new eLife model is that it confronts this issue of peer review being a seal of approval. The worry has always been that you send your paper to, say, the New England Journal of Medicine, they don’t think it’s good enough to publish, and so you just go down the chain until ultimately your paper gets published somewhere – it gets a ‘tick’ saying it’s peer reviewed. Does that mean it’s correct or good enough to publish? Clearly the journals higher up the chain didn’t think so. What the eLife model does is explicitly say peer review is a process, not a judgement. You go through eLife peer review, you get peer reviews, and those peer reviews might say the evidence basis is not sufficient for the claims made. In other words, what they mean by ‘peer reviewed’ is that there are peer reviews for this paper, not that they have decided to give the paper a tick or endorsement. It’s a very interesting – and polarising – idea, because it makes people consider the difference between peer review as a process and peer review as a certification. Again this comes back to the view that peer review doesn’t need to be the same for all papers. I could see large swathes of basic science operating like this and clearly some of the funders seem to be thinking along these lines. I find it harder to see it working for clinical research, because there I think people do feel like they want some kind of judgement as to the veracity of the work. So I’d be less likely to predict success of the eLife model in the clinical space. It probably only works if you ensure the Curate part of the Publish, Review, Curate model – there’s too much for people to read and they want a signal as to whether they should read something.”

“What the eLife model does is explicitly say peer review is a process, not a judgement.”

It’s inevitable with innovative approaches like preprints and preprint peer review that people can have some misconceptions and scepticism. Are there any misconceptions you would like to dispel?

“The notion that preprints and preprint servers are all incredibly irresponsible and it leads to all this misinformation coming out – that’s not true. That’s why we have screening and these ‘do no harm’ rules. When I look back at the pandemic as an example of this, I don’t see any big errors that were made by bioRxiv and medRxiv. I do see a lot of errors that were made at journals – the Surgisphere papers for example or papers that said COVID came from outer space. These sort of things were not coming out on bioRxiv and medRxiv. The infamous paper by Didier Raoult on hydroxychloroquine did appear as a medRxiv preprint, but within 24 hours of that it appeared in a journal as well, and that was the thing that everybody was pointing to. I wouldn’t want to blame any physicians, but in the fog of war, anecdotal reports of hydroxychloroquine, etc. meant there was a problem with misinformation there, but I don’t think we should point the finger at preprints for it.”

“The notion that preprints and preprint servers lead to misinformation coming out is not true.”

What other innovative approaches should we be considering to evolve the peer review process?

“I think you could have a number of different stages of review – so decoupling things even further and saying, for example, the person who looks at the statistics in a paper need not be the same person who looks at the biology. So we might get to a point where we can say somebody’s checked a dataset, somebody’s looked at the crystal structure, somebody’s looked at the stats, etc. – and peer review evolves to be more of a constellation of trust signals in which individual elements of the paper have been verified. This could be particularly important for multidisciplinary studies where it’s conceivable that no one person could read and understand the whole paper. More generally, we should acknowledge we have been far too dependent on papers as the indicator of somebody’s scientific contribution. There are people who write code, people who create databases and data resources, for example, and we should understand that the peer-reviewed paper is part of a broader constellation of academic outputs, some of which may never produce ‘papers’.

We could also consider the idea of separating out the technical checks of a manuscript from a contextual review, and maybe those things can be carried out by different people. That way we could involve more people in the peer review process. It’s frequently noted that the peer review process is buckling and straining and there aren’t enough peer reviewers, but there are lots of younger scientists who want to peer review papers, and maybe they can do some of the technical review and maybe the more experienced heads do some more contextual review.”

Can artificial intelligence (AI) help in the peer review process, or might it cause more problems?

“The short answer is both. It’s very clear that AI can help; we all use spelling and grammar checks, and particularly for non-native English speakers, the use of large language models to help improve their English seems like a no-brainer. There are lots of useful time-saving tools, but from the author’s perspective, you can’t take any of their outputs on trust. We’re happy to have ChatGPT help write your paper, but you should read what it’s written and make sure that you agree with it, because ultimately you as the author are responsible for the content. On the flip side, undoubtedly AI will be used by bad actors to try and fake stuff, and I think a lot of publishers are talking about the notion of an arms race between the papermills and the publishers as people try to identify content that is entirely automated and fake as opposed to things that have undergone language polishing or used a tool that helps you process your data.”

Reflecting on the journey of bioRxiv and medRxiv, what have been the most surprising or significant lessons learned about the role of preprints in scientific publishing?

“I don’t know if it was a surprise, but one thing that was very striking was the rapid adoption of medRxiv during the pandemic. There’s that saying “If you build it, they will come”, which I’m always very dismissive of because I see so many examples where people built things they thought were great and nobody came. But one of the lessons was that scientists do adopt things when they see clear benefits for themselves and the community. They were very quick to adopt email, for example, but less quick to adopt electronic notebooks. The experience with bioRxiv was that once people figured out what it was doing, a lot of them became converts because they saw it as a huge benefit to themselves as individuals, and also the community. We anticipated that medRxiv would have a slow adoption phase over five years or so before anybody really used it; then came the pandemic. We launched medRxiv in 2019 and we certainly hadn’t told anyone in China about it, but by Spring of 2020 when the pandemic started, we were getting dozens of papers every day from China. So it was amazing to see this brand new thing that didn’t exist even a year before the pandemic, suddenly have 10 million people looking at it every month.”

“It was amazing to see this brand new thing that didn’t exist even a year before the pandemic, suddenly have 10 million people looking at it every month.”

Finally, what is your vision for the future of peer review in medical publishing? It’s been just over ten years since the founding of bioRxiv. How do you see the landscape evolving over the next decade?

“What I would really hope – and we’re beginning to see signs of this – is that the funders of research see that preprints are a really easy way to address a problem that they’ve been trying to solve for 20 years: how to provide public access to research. We’ve talked about peer review and its complexity, but the challenge of public access is one that we can solve really easily by funders just saying, “Post a preprint”. That could solve the problem tomorrow. Some funders are getting close to that, like the Chan Zuckerberg Initiative, and the Michael J. Fox Foundation, and actually the Bill & Melinda Gates Foundation are now taking this kind of approach. So that would be my number one hope: that this solves the access problem.

“Preprints are a really easy way to address the problem of providing public access to research.”

The other thing I’d love to see a lot more of is experiments in peer review – both by journals and self-organised communities. There’s a real opportunity for everyone involved to decide how we can do peer review better. Decoupling will also hopefully get us away from the conflation of questions like ‘Should I read this paper?’, ‘Is this person good?’ and ‘Is this work of general interest?’ These are currently all conflated in assumptions based on the journal where the paper appears, but you can have great work that’s not in the top journals, and things that are really important aren’t necessarily of broad general interest. A post-preprint ecosystem is an opportunity to try and get away from this conflation.”

Richard Sever is Assistant Director of Cold Spring Harbor Laboratory Press, and the co-founder of bioRxiv and medRxiv and can be contacted via LinkedIn.

—————————————————–

How do you perceive the current state of the peer review system in biomedical research?

Can open access be made more equitable?
https://thepublicationplan.com/2024/07/26/can-open-access-be-made-more-equitable/ (26 July 2024)

KEY TAKEAWAYS

  • Although open access initiatives have been on the increase in low-income countries, global disparities persist in terms of who benefits the most from open access publication.
  • As one major funder moves to mandatory preprints, could this help redress the balance in terms of research dissemination and citation?

With increasing numbers of open access initiatives established worldwide, why are the benefits of open access not felt by all researchers equally? Recently, Holly Else reported for Nature on why, even though paywalls are falling, researchers from low-income countries are still struggling to be visible in the academic space.

Imbalances

Else and contributor Susan Murray (Executive Director, African Journals OnLine) reflected on the fact that many low-income countries have long established open access publication networks, and that these networks continue to grow. For example, Indonesia now has over 80% of its research activity freely available due to an increase in open access publishing platforms. Despite this, researchers in low-income countries tend to be subject to imbalances of power and resources to a greater degree than those in higher-income countries, which can prevent them from benefiting fully from these systems.

Inequities

Other inequities exacerbate the problem. As previously reported by The Publication Plan, a recent study by Dr Chun-Kai Huang and colleagues showed that the advantages of open access publication, such as increased and more diverse citations, are not felt evenly by researchers across the globe. In this large study of 420 million citations over 10 years, researchers from Northern Europe benefited the most from their work being published open access.

Innovations

Else highlights that, while open access is integral to ensuring the visibility of research, speed of publication is also key. Others have reported on the power of preprints to confer a citation advantage. An interesting development in this area is the Bill & Melinda Gates Foundation's decision to stop funding gold open access and instead require grant recipients to post their work on public preprint servers. It remains to be seen whether such changes can help redress the balance in terms of who benefits from open access.

————————————————–

What do you think – would mandatory preprints help to make open access publishing more equitable for researchers worldwide?

eLife’s ‘reviewed preprint’ model: results from the first year https://thepublicationplan.com/2024/07/02/elifes-reviewed-preprint-model-results-from-the-first-year/ Tue, 02 Jul 2024 15:09:53 +0000

KEY TAKEAWAYS

  • A year after the launch of their ‘reviewed preprint’ model, the journal eLife has released their key findings.
  • eLife report over 6,200 submissions, 2.5× faster time to publication, and no significant change in quality.

In January 2023, eLife made the radical decision to end the process of accepting or rejecting papers after peer review, in favour of publishing ‘reviewed preprints’. A year on, they have released their key findings.

What is the ‘reviewed preprint’ model?

In this model, all articles selected for peer review are published on the eLife website as a reviewed preprint alongside an eLife assessment, public reviews, and a response from the authors (if provided).

What are the key results?

In the first year, eLife report:

  • over 6,200 submissions received and more than 1,300 reviewed preprints published
  • over 2.5× faster time from submission to publication than the legacy model
  • no significant change in the quality of submissions (based on ratings for significance and strength of evidence)
  • quality of eLife assessments and public reviews rated highly by authors.

When the new model was launched, eLife reported that views across academic publishing were mixed, with concerns that:

  • authors would not submit their work
  • editors and reviewers would not want to be involved
  • articles would be of low quality or only from researchers with the most confidence in their work.

However, a year on, eLife consider the reality to be much more encouraging, highlighting how:

  • editors and reviewers have been able to focus on summarising the strengths and weaknesses of an article, with their views open for debate
  • authors and reviewers have been able to exchange views without fear of articles being rejected
  • the majority of authors have revised their articles in response to reviewer comments, resulting in what eLife believe to be ‘better science all around’.

What’s next?

Going forward, eLife commit to continued evolution and adaptation. One proposal is to extend this approach to articles that may not typically be published by broad-interest journals, such as important negative or preliminary findings.

eLife welcome ideas to help them achieve these aims. They also encourage other publishers to adopt some aspects of their approach by making their software infrastructure freely available.

————————————————–

Would you be more likely to submit to eLife based on these results?

Finding the way forward for peer review https://thepublicationplan.com/2023/03/30/finding-the-way-forward-for-peer-review/ Thu, 30 Mar 2023 13:04:35 +0000

KEY TAKEAWAYS

  • The systems for finding, training, and incentivising peer reviewers may need to change to meet current demand.

Peer review has developed as a means of establishing quality control in research, but can current processes keep up with rapidly increasing research volumes? In a recent Nature Career Feature article, Amber Dance reported on the difficulties and ideas for overhauling the system, drawing on the experiences of a range of stakeholders in the peer review process.

Several issues with current peer review processes were raised:

  • It takes time. Aczel et al estimated that in 2020, reviewers worldwide spent over 130 million hours (nearly 15,000 years) reviewing articles.
  • It is often unpaid work. While this might reduce the risk of bias, it makes peer reviewing unfeasible for some.
  • Reviewers are becoming more selective about the work they are willing to take on. Some now only peer review for not-for-profit journals or preprints, where they focus on the science rather than suitability for a given journal.
  • There is underrepresentation of junior researchers and those from countries with less well-established research infrastructure.
  • It can be a slow process, sometimes delaying publication and, in turn, the ability of research to shape policy. In some cases, lengthy processes may even drive researchers to leave academia altogether.

Dance explored opinions on how peer review could change, such as:

  • Incentives for researchers’ time. This might vary from a free journal subscription to the more controversial issue of journals paying for reviews. Other incentives might include giving more recognition to named peer reviewers.
  • Peer review training for early-career researchers and those in lower-income countries, to increase the pool and diversity of potential reviewers.
  • Increasing the use of technology to check aspects of statistics or methods, for example.
  • Reducing the number of reviews needed through increased screening of submissions prior to peer review, allowing authors to ‘recycle’ reviews for a related journal submission, or enabling submission of reviews collected before an initial submission (such as those from eLife reviewed preprints).

Drawing on these perspectives, many changes could be made to peer review – we look forward to seeing how processes may evolve in future.

—————————————————–

What would you most like to see change with peer review?

eLife shifts to publishing ‘reviewed preprints’, ending accept/reject decisions https://thepublicationplan.com/2023/03/14/elife-shifts-to-publishing-reviewed-preprints-ending-accept-reject-decisions/ Tue, 14 Mar 2023 09:39:35 +0000

KEY TAKEAWAYS

  • The journal eLife has abolished accept/reject decisions and established a new model for research dissemination.
  • All papers assessed by the journal will now be published as reviewed preprints, featuring open peer reviews and a further, standardised assessment.

Following eLife’s 2021 policy to make preprinting a mandatory requirement ahead of submission, the journal has recently stopped accepting or rejecting papers after peer review. Instead, eLife will publish ‘reviewed preprints’, which they hope will bring together the diligence of peer review and the fast pace of preprint posting.

The reviewed preprint format will see all papers that are accepted for review posted online. They will appear alongside open peer reviews and an eLife assessment, which summarises editor and peer reviewer opinions on the paper’s impact and strength of evidence. These assessments are tailored to non-experts and use standardised language, akin to a grading scale. Authors also have the opportunity to include a response to the reviewer comments and assessment. After posting, authors can choose to take no further action, submit a revised version of the paper, or deem the article the final version of record to be indexed on PubMed.

eLife have shared various benefits that they believe this approach will offer:

  • increased transparency around author–reviewer communication
  • improved communication of intricate reviewer thought processes
  • greater accessibility for non-expert readers
  • greater autonomy for authors
  • increased emphasis on paper content, as opposed to publisher title
  • faster mobilisation of scientific content.

Editor-in-Chief Michael Eisen has described the major shift in approach as “relinquishing the traditional journal role of gatekeeper” in favour of a system tailored to scientists rather than publishers.

In a recent editorial, eLife editors expressed their hopes that this transparent review model will encourage readers to judge a paper by its content as opposed to the journal it is published in, and that it will “become the norm across science”.

—————————————————–

Do you think more publishers should employ ‘reviewed preprints’ going forward?

How common is data sharing for COVID-19 preprints? https://thepublicationplan.com/2022/10/06/how-common-is-data-sharing-for-covid-19-preprints/ Thu, 06 Oct 2022 13:35:48 +0000

KEY TAKEAWAYS

  • A study of COVID-19 preprints found that only a quarter included a data sharing statement, and just 15% shared raw data.
  • The authors call for preprint servers to introduce compulsory data sharing statements and for better education of researchers on data sharing.

The use of preprints has skyrocketed since the start of the COVID-19 pandemic, with this publication type included in guidance from the International Committee of Medical Journal Editors updated earlier this year. Publishing via preprints should allow other scientists to scrutinise key findings at the earliest opportunity; however, a recent study found that many authors using the popular medRxiv and bioRxiv preprint servers do not share their raw data.

Prof Livia Puljak and her team analysed data sharing practices in COVID-19 preprints published in early 2020. Of 699 articles, a mere 26% included a data sharing statement. Raw data were only accessible for 15% of all articles, and were unobtainable even for half of the preprints that reported data as being accessible.

It seems that problems with open and transparent data sharing aren’t limited to COVID-19 research or to preprints – the same team also reported low rates of data sharing in a larger study of peer reviewed biomedical and health science articles.

Sharing data is crucial to enable replication of key results and effective design of further studies, and may be especially important for ‘work in progress’ preprints, for which conclusions may change over time. As such, the authors call for compulsory data sharing statements in all preprints and “education of researchers about the meaning of data sharing.”

—————————————————–

What do you think – should preprint authors share all their raw data at the time of preprint publication?

How to encourage constructive public feedback on preprints? https://thepublicationplan.com/2022/08/09/how-to-encourage-constructive-public-feedback-on-preprints/ Tue, 09 Aug 2022 11:12:08 +0000

KEY TAKEAWAYS

  • As preprint publications increase in popularity, mechanisms to encourage transparent public review are needed.
  • The FAST principles provide a framework for use by authors, reviewers, and the wider community to foster engagement in preprint discussions.

The publication of preprint articles has gathered pace in recent years, accelerating rapidly during the COVID-19 pandemic. A key advantage of preprints is that they can be scrutinised by a diverse audience ahead of submission to traditional scholarly journals. Despite the benefits of public feedback, just 5–10% of preprint articles on bioRxiv and medRxiv receive publicly accessible comments, with many reviewers preferring to provide feedback privately.

To help facilitate rapid and constructive preprint feedback, ASAPbio established a Working Group to develop best-practice guidance for public commentary and engagement with open preprint discussions. In their guest post on The Scholarly Kitchen, Sandra Franco Iborra, Jessica Polka, and Iratxe Puebla of ASAPbio summarised the FAST principles for preprint feedback that were developed.

The 14 FAST principles are grouped into 4 central themes:

  • Focussed: comments and feedback should focus on the scientific content and not the suitability for potential target journals.
  • Appropriate: reviewers should reflect on their potential biases and engage in scientific discourse respectfully and with integrity.
  • Specific: feedback should be candid, assess a study’s claims against the data presented, and be clear on whether issues identified are major or minor.
  • Transparent: reviews should be as open and transparent as possible and credit any co-reviewers. Those not comfortable signing their review can disclose their background or expertise alongside their comments.

The authors highlight that the FAST principles are relevant to everyone involved in feedback, including journals, authors, and the wider community. Importantly, they are not intended to replace the reviewer guidance already provided by traditional scholarly journals, but rather to complement it, facilitating communication between authors and peer reviewers and promoting positive reviewing behaviours.

The authors propose that encouraging public review of preprint articles could help journals expand and diversify their reviewer pool by identifying junior researchers and those located across broader geographical regions.

There has already been a move by some journals to incorporate preprint reviews into their editorial processes. Both Review Commons and Peer Community In Registered Reports (PCI RR) provide journal-independent preprint review, which is accepted by several affiliated journals. The FAST principles could be used to support this process, defining the expectations for preprint reviews that will ultimately be acceptable to scholarly journals.

The authors hope that the FAST principles will contribute to a broader conversation on the review process, helping produce a more positive and diverse culture.

—————————————————–

Will the FAST principles encourage you to engage in public discussion of preprint articles?

An interview with Valérie Philippon: beyond clinical trial data https://thepublicationplan.com/2022/05/16/an-interview-with-valerie-philippon-beyond-clinical-trial-data/ Mon, 16 May 2022 13:10:39 +0000

While randomised clinical trials are often considered the gold standard of medical research, data generated from other settings can provide highly valuable findings and should be considered when developing a publication plan. Following her presentation at the 2021 International Society for Medical Publication Professionals (ISMPP) West meeting, The Publication Plan talked to Valérie Philippon, Head of Global Publications at Takeda, to find out more about the benefits of including non-clinical trial research in publication strategies.

At the ISMPP West meeting in October 2021, we heard how data from outside clinical trial settings – such as observational research and health economic analyses – can add value to a publication strategy. What gaps can these data fill and what are the benefits of incorporating them into publication strategies?

“To develop an impactful and complete publication plan, one has to consider all types of data: not only the clinical data but also preclinical, health economics and outcomes, and real-world evidence. Doing an in-depth gap analysis of a particular disease area is a really important first step in developing a publication plan, followed by identifying all relevant audiences and mapping out data availability. Clinical data are of course designed to answer key questions about a particular intervention, but not all medical questions can be answered by randomised clinical trials, since not all populations can be studied at once. Observational studies can shed light on many disease parameters and outcomes research can help understand how patients experience health care interventions. These findings should be incorporated into publication plans.”

In your experience, what are the most important things to understand about these types of datasets? How do they differ from clinical trial data?

“Understanding the differences between controlled trials and observational studies is key to interpreting data and comprehending the limitations of the studies. In clinical trials, individuals are assigned to receive one or more interventions (or no intervention) so that the effects of the interventions on biomedical or health-related outcomes can be evaluated; whereas observational study participants may receive diagnostic, therapeutic, or other types of interventions, but the investigator does not assign participants to specific interventions. One limitation of observational studies is selection bias, as individuals in a database or study are not randomly assigned to an active or control group, but generally have already been diagnosed with the disease being studied and/or are receiving a specific treatment, making any attempt to control variables difficult. However, such studies can be very large and span multiple years leading to rich datasets. The two types of study are complementary and should both be included in publication planning.”

Are there any best practice tips you could share on how to incorporate value communications into a publication plan?

“A coherent value communication, scientifically driven and relevant to each audience, should be developed early and updated frequently as more data are generated. Understanding the current landscape, identifying the needs of each audience and mapping out data availability are all key components to take into account while building a publication plan incorporating strategic data release.”

We also heard how the target audiences for value communications may include decision makers, payers, and patients, in addition to healthcare practitioners (HCPs). How do the needs of the end-users differ and how should that impact the publication planning approach?

“A publication plan must target all relevant audiences, in terms of type of communication, venue and timing. HCPs need safety and efficacy information in order to inform their diagnostic and treatment decisions, while payers need economic and outcome data to help them with reimbursement decisions. Understanding the payers’ needs is critical for a company to ensure that key information is communicated early enough for its innovation(s) to translate into patient value and access.”

There’s growing recognition of the importance of making medical publications accessible to a wider audience through instruments such as plain language summaries (PLS). What are your perspectives on how the industry can innovate and adopt policies to achieve this goal? What should we as publication professionals be doing to communicate more effectively with a non-HCP audience?

“Scientific literature is by definition technical and difficult to understand for non-specialists and more can be done to make scientific and medical research more accessible and inclusive. Plain English and lay summaries of peer reviewed medical journal publications are intended for everyone engaging with medical research, such as all HCPs (MD and non-MD, specialists and general practitioners), patients, patient advocates, caregivers, and the general public. More and more PLS are developed alongside medical publications and indexed in PubMed, but we can still do better at consistently developing PLS, and agree as an industry on their format (eg, length, inclusion of infographics), the process involved in writing PLS and how to make PLS discoverable.”

At the ISMPP EU 2022 meeting you co-authored a study on the accessibility of pharmaceutical industry-sponsored research. Do you think universal open access for pharma-sponsored research is an achievable goal? What’s the current approach to open access at Takeda?

“As a commitment to transparency and to open science, Takeda requires the submission of all Takeda-supported research manuscripts to journals that offer public availability via open access, allowing the public to obtain free, unrestricted online access to Takeda’s research promptly following publication. On a personal level, I believe that universal open access for pharma-sponsored research is a very attainable goal that all pharmaceutical companies should prioritise.”

You have also been involved in research on the utilisation of preprints by the pharmaceutical industry, which to date has been relatively limited. Do you envisage greater use of preprint servers by the industry in the future? Are there any fundamental barriers preventing the pharma industry from using preprints?

“As we demonstrated in our ISMPP posters (Current trends in pharmaceutical industry-affiliated preprints, Wieting et al, presented at the 2021 ISMPP European Meeting and Current trends in pharmaceutical industry-affiliated medRxiv and bioRxiv preprints, Wieting et al, presented at the 2022 ISMPP European Meeting), only a small proportion of preprints are affiliated with pharmaceutical companies and even fewer have a pharmaceutical employee as first or corresponding author. There is a reluctance to share clinical data in preprint form as some companies feel that preprints fall outside of the scientific exchange, so more guidance from regulatory bodies would be helpful.”

Valérie Philippon is Head of Global Publications, Takeda, and can be contacted via valerie.philippon@takeda.com or LinkedIn.

—————————————————–

 

In your experience, how often do scientific publication plans include data from research beyond clinical trials?
