Post-publication peer review – The Publication Plan for everyone interested in medical writing, the development of medical publications, and publication planning
https://thepublicationplan.com
A central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing.

eLife’s peer review approach leads to loss of impact factor
https://thepublicationplan.com/2025/04/10/elifes-peer-review-approach-leads-to-loss-of-impact-factor/
Thu, 10 Apr 2025

KEY TAKEAWAYS

  • eLife adopted a ‘reviewed preprint’ publishing model in 2023, publishing all reviewed papers, regardless of reviewer recommendation.
  • Clarivate has since updated its policies to no longer provide impact factors for journals that publish papers that are not endorsed by peer review.

As reported in Research Professional News, the non-profit research journal eLife will not receive an impact factor from Web of Science in 2025, following implementation of a new policy by Web of Science provider Clarivate. Under eLife’s ‘reviewed preprint’ model, adopted in 2023, all submitted research papers that undergo peer review are published, regardless of whether reviewers recommended them for publication. In response to the growing trend of journals decoupling publication from peer review, Clarivate introduced its policy to index only content that has been validated by peer review.

“[Clarivate’s policy] reflects our commitment to support the integrity of the scholarly record through curation and selectivity in the Web of Science.” – Nandita Quaderi, editor-in-chief, Web of Science

eLife, a signatory of the Declaration on Research Assessment, opposes the reliance on metrics like the impact factor and has reiterated its commitment to meaningful research assessment, stating that its model is closer to the ideal of how scientific discourse should work.

This development will inevitably spark discussion about the pros and cons of traditional metrics in research assessment. eLife’s innovative model challenges the conventional take on peer review, prompting the scientific community to reconsider how best to measure research impact and quality. As the peer review and publishing landscape evolves, this case underscores the need for ongoing dialogue about practices that best serve the advancement of science.

————————————————–

Do you believe traditional metrics like the impact factor accurately reflect research quality?

Uncovering scientific ERRORs: can financial rewards work?
https://thepublicationplan.com/2024/10/31/uncovering-scientific-errors-can-financial-rewards-work/
Thu, 31 Oct 2024

KEY TAKEAWAYS

  • The ERROR project pays reviewers to search for mistakes in the scientific literature, while rewarding authors who agree to participate.
  • Reviewers and authors receive bonuses depending on the extent of errors found.

Amid rising retraction rates, the scientific record is increasingly scrutinised for signs of research misconduct like fabrication and image manipulation. But what about detecting errors in the data underlying scientific publications?

The ERROR project

Modelled on tech company ‘bug bounty’ programmes, the Estimating the Reliability & Robustness of Research (ERROR) project offers cash rewards for reviewers identifying incorrect or misinterpreted data, code, statistical analyses, or citations in scientific papers. Following ERROR’s launch earlier this year, Julian Nowogrodzki reviewed the project so far in a recent article in Nature.

Professor Malte Elson and colleagues are aiming to produce a blueprint for systematic error detection that will be scalable and transferable across scientific fields. Starting with highly cited psychology papers, the first review was posted in August. ERROR plans to cover 100 publications over 4 years, expanding into artificial intelligence, medical research, and potentially preprints.

“The ERROR project offers cash rewards for reviewers identifying incorrect or misinterpreted data, code, statistical analyses, or citations in scientific papers.”

Financial incentives

The project has 250,000 Swiss francs (~£220,000) of funding from Professor Elson’s institution, the University of Bern. Reviewers can earn up to 1,000 Swiss francs each time, plus a variable bonus of up to 2,500 Swiss francs depending on the scale of errors identified. Authors receive up to 500 Swiss francs: 250 for agreeing to participate and sharing data, plus a bonus if minimal errors are found.

A challenging path

Despite the incentives, ERROR has hurdles to overcome:

  • Author buy-in: So far, authors from just 17 of 134 selected papers have agreed to participate.
  • Data access: Underlying data may have been lost or authors may cite legal reasons barring sharing.
  • Reviewer expertise: There are limited potential reviewers with sufficient technical expertise yet no conflicts of interest. Dynamics linked to seniority may also prevent some prospective reviewers taking part.

The ERROR team hopes to convince research funders to allocate money for error detection – ultimately saving them from investing in flawed research. We look forward to seeing how this project helps move the needle towards a more reproducible scientific record.

————————————————–

Do you think current ‘ad hoc’ approaches to error detection in the scientific record are sufficient?

The evolution of evaluation: Richard Sever on the future of peer review
https://thepublicationplan.com/2024/09/10/the-evolution-of-evaluation-richard-sever-on-the-future-of-peer-review/
Tue, 10 Sep 2024

Peer review is fundamental to the evaluation of biomedical research, ensuring the rigour and credibility of published scientific findings. However, the system is under mounting pressure due to the sheer volume of research being conducted, and the quality and timeliness of research evaluation is increasingly at stake. Richard Sever, co-founder of the bioRxiv and medRxiv preprint servers, is at the forefront of efforts to innovate in this space. We spoke with Richard to discuss his vision for the future of peer review, exploring how preprints and evolving evaluation methods might address the challenges facing scientific publishing today.

You recently participated in a session on the future of peer review at the ISMPP Annual Meeting. Do you believe that the existing peer review model effectively meets the needs of the scientific community, particularly in biomedical and clinical research? If there is room for improvement, what are the main deficiencies of the current system and what can be done to address them?

“I do think there’s room for improvement. When we say peer review, often what we mean is a broader picture that includes the editorial and administrative checks that a journal does, as well as the formal review by peers. That’s where things vary a lot – there are some journals that are incredibly responsible and do a very good job, and we know that there are some where it’s peer review in name only, most obviously the predatory journals. But there’s a spectrum, so there’s a lot of opportunity to improve the process. Part of that might be making different choices for different types of article. For example, for papers where there’s patient involvement, there needs to be far more stringent scrutiny than for a basic research paper. Patient consent for publication, deidentification of patient data – you can’t really expect peer reviewers to do those kinds of checks; you expect the journal to do them. In recent years, I’ve become more concerned about these editorial checks than peer review per se, because opinions will differ on the quality of manuscripts and it’s clearly not the case that the three people who peer review a paper are a representative sample of everybody who could review it; however, the integrity checks that a journal performs may ultimately be more important. Different journals cover different subjects though, so maybe they can approach things differently. A journal dealing with a high volume of basic research papers, for example, may not need to worry as much about certain checks. This is where we start considering the benefits of peer review, and in some cases, it may be better done after publication, leading to a more multidimensional, ongoing process. On the other hand, for a vaccine study, you may want a very thorough peer review before it goes out into the world, depending on the results.”

“…there’s a lot of opportunity to improve peer review. Part of that might be making different choices for different types of article.”

You co-founded the medRxiv preprint server for health science research in 2019. How and where do preprint servers fit into the existing peer review model? Has that positioning evolved in the years since medRxiv was launched?

“The clear thing about preprint servers is that they’re decoupling research dissemination from research evaluation and specifically from peer review evaluation. What has become very clear both in the basic science space and in the clinical space is that you can do this so long as you responsibly put out preprints and make it clear that these are authors’ claims and they have not been verified. This is a good thing, because it acclimatises people to the fact that science can be a bit messy and just because somebody has put something out there, it doesn’t mean it’s necessarily valid. Preprints have demonstrated that you can do this decoupling, which then allows us to have a conversation about what the evaluation should look like. There are checks you can do very quickly at a preprint server: Does this paper look like it’s completely plagiarised? Does it seem completely unreasonable? Once those checks are done and the article is online, there’s more time to do a thorough review with less pressure. This is where the real opportunity lies for journals, and indeed new organisations that want to do peer review differently, to say “OK, the paper is out there, we are now going to evaluate it. Can we evaluate it in a better way because we haven’t had to rush the evaluation as the dissemination has already been achieved?””

“Preprints have demonstrated that you can decouple research evaluation from research dissemination.”

“In the 10 years since bioRxiv launched, we’ve had many different fields embracing this process and people understanding that you have to read the paper yourself; you can’t just take its conclusions on trust. It’s concentrated people’s minds in that respect, because we can all point to papers that apparently underwent ‘peer review’ but we’re aghast that they somehow made it through. What’s interesting is that the existence of bioRxiv is allowing people to begin to experiment with peer review. You now have organisations like Review Commons and Peer Community In, which are not journals; they are peer review services that operate based on the fact that there is already a preprint out there on bioRxiv or medRxiv.

The other thing we’ve certainly found at medRxiv is that you have to do this responsibly. There’s a small number of papers where the findings might influence public behaviour and we say these should go through peer review before dissemination, but that’s not true of 99% of clinical papers. That’s part of medRxiv’s initial screening, the obvious example being a paper claiming a life-saving treatment or vaccine was dangerous. A consequence of its dissemination could be that a lot of people stop taking the treatment – that would be a problem and we wouldn’t post it. But most papers aren’t in that category, and in the clinical space, the pandemic showed that epidemiology could be disseminated as preprints with huge benefit. For example, the RECOVERY trial showing dexamethasone was an effective treatment for severe COVID came out as a preprint on medRxiv many weeks before it appeared in the New England Journal of Medicine.”

Thinking specifically about pharmaceutical industry-sponsored biomedical research, how have pharmaceutical companies embraced the use of preprint servers for disseminating their research findings? Speed of dissemination of preprints was a notable benefit during the COVID-19 pandemic. What are the other motivations for industry to use preprint servers for research dissemination?

“To the credit of the pharmaceutical industry, some of them are trying to figure out whether this is something they can or should do. We did get industry-supported papers showing the effectiveness of the COVID vaccines against different variants and that type of thing during the pandemic. So industry can and should make use of preprint servers. Part of the hesitation is this question of ‘safe harbour’ and what seems not quite resolved in everybody’s minds is whether pharmaceutical companies can put out these sorts of studies under safe harbour. The preclinical studies, the very basic research, I think they’re happy with, but some people in the pharmaceutical industry are worried that if they put out a paper that seems to show a clinical effect as a preprint, then they might be accused of trying to use the preprint server as a way to get around peer review and get out publicity claiming that a treatment works.

Speed of dissemination is the number one motivation for using a preprint server; another motivation is that you can revise preprints. So you can put out a preprint, get some comments, and improve it so that when you do send it to a journal, it’s in much better shape. A lot of people have observed that their papers have had easier rides through peer review journals because they’d ironed out some of the kinks after getting feedback on the preprint. There may also be some papers where you’re just getting some information out there, a follow-up work, for example, that doesn’t need formal peer review, and this will instead come in the community discussion that happens afterwards. I think that’s a debate among the scientific and clinical community as to what percentage of papers fall into that category.”

What are the primary challenges associated with the submission of industry-sponsored research to preprint servers? There can often be considerations relating to proprietary data, regulatory requirements, and the potential for misinformation when disseminating clinical studies, for instance. How can these challenges be addressed?

“This is why I think it’s important that preprint servers have screening to eliminate or minimise the possibility of misinformation. There is a difference between a responsibly operated server like medRxiv and some databases that don’t screen at all. It’s also why we have more stringency in our screening checks on medRxiv than bioRxiv, because of these kinds of concern.

One of the benefits of the preprint server is that it doesn’t claim to have verified the information. I’m far more concerned about misinformation that appears in journals where there is a claim that the information has been peer reviewed, so a journalist then comes across it and assumes it’s been peer reviewed so it must be right. I often joke that the papers that claimed that COVID came from 5G towers were in so-called peer-reviewed journals, and not preprints. If that sort of thing came into medRxiv, we wouldn’t post it.”

“I’m more concerned about misinformation that appears in journals where there is a claim that the information has been peer reviewed.”

Preprint review is gaining traction as an approach to evaluating scientific research before formal journal publication, and you’ve mentioned the advantages of decoupling research evaluation from dissemination. How best do you think preprint review can complement traditional journal peer review?

“One obvious way is that a journal that’s doing traditional peer review can factor in the other evaluations that are going on. Review Commons is an interesting example in that you post a paper on bioRxiv, then you can go to Review Commons, who will do the peer review, and then you can take those peer reviews to a journal. There’s also the approach that one of the PLoS journals took, where they were actively looking at comments sections of preprints and taking the discussion into account in their peer review evaluation. I would certainly do that if I were an editor – if you’re getting two or three people’s peer reviews of a paper but there’s lots of discussion about that paper online that seems well-informed, then of course you’d want to factor that into your judgement. In the early days of Twitter, there were a lot of very good discussions of scientific papers – it’s become more polluted in recent years – and that demonstrated the potential for self-organised research evaluation. We shouldn’t lose sight of the fact that that’s what we really mean by peer review. Sometimes we think of peer review as a very formal process done over a period of weeks operated by a journal, but really in the scientific sense, peer review is the scientific community discussing and evaluating work and debating its significance. So it all comes back to this idea of decoupling of research evaluation from dissemination and asking how can we do the evaluation better.”

“We shouldn’t lose sight of the fact that that’s what we really mean by peer review… …the scientific community discussing and evaluating work and debating its significance.”

Thinking about a decoupled approach to research evaluation, what do you think about a model whereby medical societies commission their own peer reviews instead of relying on the traditional journal peer review approach?

“One of the questions I would ask, if you were a scientific or medical society considering creating a new journal tomorrow and you knew that all the papers were going to be on bioRxiv or medRxiv, is what’s the point in hosting the papers on a website if they’re already on a preprint server? You can just do the review part. This gets back to a phrase that some people have used to describe the future: Publish, Review, Curate. Scientific societies would be perfectly positioned to do that – they have the expertise, and they are seen as working in the interests of the scientific community. The challenge, as with so much of publishing, is the business model and who pays, but that’s a challenge the entire industry is facing. At least the decoupling means that you don’t have to pay for the hosting and putting the papers online, because that’s already been done.”

We recently featured a piece on eLife’s ‘reviewed preprint’ model and the journal’s experience from the first year, with faster research dissemination without a reduction in quality. Do you see eLife’s model as a blueprint for the future of biomedical publishing?

“The interesting thing about the new eLife model is that it confronts this issue of peer review being a seal of approval. The worry has always been that you send your paper to, say, the New England Journal of Medicine, they don’t think it’s good enough to publish, and so you just go down the chain until ultimately your paper gets published somewhere – it gets a ‘tick’ saying it’s peer reviewed. Does that mean it’s correct or good enough to publish? Clearly the journals higher up the chain didn’t think so. What the eLife model does is explicitly say peer review is a process, not a judgement. You go through eLife peer review, you get peer reviews, and those peer reviews might say the evidence basis is not sufficient for the claims made. In other words, what they mean by ‘peer reviewed’ is that there are peer reviews for this paper, not that they have decided to give the paper a tick or endorsement. It’s a very interesting – and polarising – idea, because it makes people consider the difference between peer review as a process and peer review as a certification. Again this comes back to the view that peer review doesn’t need to be the same for all papers. I could see large swathes of basic science operating like this and clearly some of the funders seem to be thinking along these lines. I find it harder to see it working for clinical research, because there I think people do feel like they want some kind of judgement as to the veracity of the work. So I’d be less likely to predict success of the eLife model in the clinical space. It probably only works if you ensure the Curate part of the Publish, Review, Curate model – there’s too much for people to read and they want a signal as to whether they should read something.”

“What the eLife model does is explicitly say peer review is a process, not a judgement.”

It’s inevitable with innovative approaches like preprints and preprint peer review that people can have some misconceptions and scepticism. Are there any misconceptions you would like to dispel?

“The notion that preprints and preprint servers are all incredibly irresponsible and it leads to all this misinformation coming out – that’s not true. That’s why we have screening and these ‘do no harm’ rules. When I look back at the pandemic as an example of this, I don’t see any big errors that were made by bioRxiv and medRxiv. I do see a lot of errors that were made at journals – the Surgisphere papers for example or papers that said COVID came from outer space. These sort of things were not coming out on bioRxiv and medRxiv. The infamous paper by Didier Raoult on hydroxychloroquine did appear as a medRxiv preprint, but within 24 hours of that it appeared in a journal as well, and that was the thing that everybody was pointing to. I wouldn’t want to blame any physicians, but in the fog of war, anecdotal reports of hydroxychloroquine, etc. meant there was a problem with misinformation there, but I don’t think we should point the finger at preprints for it.”

“The notion that preprints and preprint servers lead to misinformation coming out is not true.”

What other innovative approaches should we be considering to evolve the peer review process?

“I think you could have a number of different stages of review – so decoupling things even further and saying, for example, the person who looks at the statistics in a paper need not be the same person who looks at the biology. So we might get to a point where we can say somebody’s checked a dataset, somebody’s looked at the crystal structure, somebody’s looked at the stats, etc. – and peer review evolves to be more of a constellation of trust signals in which individual elements of the paper have been verified. This could be particularly important for multidisciplinary studies where it’s conceivable that no one person could read and understand the whole paper. More generally, we should acknowledge we have been far too dependent on papers as the indicator of somebody’s scientific contribution. There are people who write code, people who create databases and data resources, for example, and we should understand that the peer-reviewed paper is part of a broader constellation of academic outputs, some of which may never produce ‘papers’.

We could also consider the idea of separating out the technical checks of a manuscript from a contextual review, and maybe those things can be carried out by different people. That way we could involve more people in the peer review process. It’s frequently noted that the peer review process is buckling and straining and there aren’t enough peer reviewers, but there are lots of younger scientists who want to peer review papers, and maybe they can do some of the technical review and maybe the more experienced heads do some more contextual review.”

Can artificial intelligence (AI) help in the peer review process, or might it cause more problems?

“The short answer is both. It’s very clear that AI can help; we all use spelling and grammar checks, and particularly for non-native English speakers, the use of large language models to help improve their English seems like a no-brainer. There are lots of useful time-saving tools, but from the author’s perspective, you can’t take any of their outputs on trust. We’re happy to have ChatGPT help write your paper, but you should read what it’s written and make sure that you agree with it, because ultimately you as the author are responsible for the content. On the flip side, undoubtedly AI will be used by bad actors to try and fake stuff, and I think a lot of publishers are talking about the notion of an arms race between the papermills and the publishers as people try to identify content that is entirely automated and fake as opposed to things that have undergone language polishing or used a tool that helps you process your data.”

Reflecting on the journey of bioRxiv and medRxiv, what have been the most surprising or significant lessons learned about the role of preprints in scientific publishing?

“I don’t know if it was a surprise, but one thing that was very striking was the rapid adoption of medRxiv during the pandemic. There’s that saying “If you build it, they will come”, which I’m always very dismissive of because I see so many examples where people built things they thought were great and nobody came. But one of the lessons was that scientists do adopt things when they see clear benefits for themselves and the community. They were very quick to adopt email, for example, but less quick to adopt electronic notebooks. The experience with bioRxiv was that once people figured out what it was doing, a lot of them became converts because they saw it as a huge benefit to themselves as individuals, and also the community. We anticipated that medRxiv would have a slow adoption phase over five years or so before anybody really used it; then came the pandemic. We launched medRxiv in 2019 and we certainly hadn’t told anyone in China about it, but by Spring of 2020 when the pandemic started, we were getting dozens of papers every day from China. So it was amazing to see this brand new thing that didn’t exist even a year before the pandemic, suddenly have 10 million people looking at it every month.”

“It was amazing to see this brand new thing that didn’t exist even a year before the pandemic, suddenly have 10 million people looking at it every month.”

Finally, what is your vision for the future of peer review in medical publishing? It’s been just over ten years since the founding of bioRxiv. How do you see the landscape evolving over the next decade?

“What I would really hope – and we’re beginning to see signs of this – is that the funders of research see that preprints are a really easy way to address a problem that they’ve been trying to solve for 20 years: how to provide public access to research. We’ve talked about peer review and its complexity, but the challenge of public access is one that we can solve really easily by funders just saying, “Post a preprint”. That could solve the problem tomorrow. Some funders are getting close to that, like the Chan Zuckerberg Initiative, and the Michael J. Fox Foundation, and actually the Bill & Melinda Gates Foundation are now taking this kind of approach. So that would be my number one hope: that this solves the access problem.

“Preprints are a really easy way to address the problem of providing public access to research.”

The other thing I’d love to see a lot more of is experiments in peer review – both by journals and self-organised communities. There’s a real opportunity for everyone involved to decide how we can do peer review better. Decoupling will also hopefully get us away from the conflation of questions like ‘Should I read this paper?’, ‘Is this person good?’ and ‘Is this work of general interest?’. These are currently all conflated in assumptions based on the journal where the paper appears, but you can have great work that’s not in the top journals, and things that are really important aren’t necessarily of broad general interest. A post-preprint ecosystem is an opportunity to try and get away from this conflation.”

Richard Sever is Assistant Director of Cold Spring Harbor Laboratory Press and co-founder of bioRxiv and medRxiv. He can be contacted via LinkedIn.

—————————————————–

How do you perceive the current state of the peer review system in biomedical research?

Peer review: a step on the road to assurance
https://thepublicationplan.com/2019/01/03/peer-review-a-step-on-the-road-to-assurance/
Thu, 03 Jan 2019

Peer review can be a source of frustration among researchers, potentially delaying publication and/or biasing editorial decisions. However, in a recent article published by The New York Times, Professor Aaron E. Carroll, an editor at JAMA Pediatrics, argues that, “Throwing out the system — which deems whether research is robust and worth being published — would do more harm than good.”

Professor Carroll takes a look at the system’s weaknesses and provides some ideas for how to improve peer review. He acknowledges that reviewers often receive no specific training and may be overworked. He also highlights that peer review is not always consistent, drawing attention to a study published in 1982, in which 8 out of 12 previously published papers were rejected when resubmitted to the same journals 18–32 months later. He suggests that innovative research may face challenges surviving peer review and discusses the various biases that can occur during the review process, including gender bias.

Professor Carroll lists formal training, payment and incentives for reviewers, blinded peer review, and public judgement of preprints as just some ideas for improving the system. He goes on to explain how the submission process is handled at JAMA Pediatrics, including how the journal follows the subsequent outcomes for rejected papers, as a way of reviewing and checking its own processes. Ultimately, he argues that we need to change how peer review is regarded, with it (and publication) as “steps on the road to assurance, not a final stamp of approval”. Indeed, studies have noted the value of peer review and its importance in improving the aspects of medical research that readers rely on most heavily to evaluate published findings, including the discussion of study limitations, generalisations, and use of confidence intervals. The onus may then be on the research community to conduct continued appraisal of the literature through post-publication peer review.

——————————————————–

Summary by Louise Niven DPhil, CMPP from Aspire Scientific

——————————————————–

With thanks to our sponsors, Aspire Scientific Ltd and NetworkPharma Ltd


[VIDEO] Preprints and peer review – feedback from the 14th Annual Meeting of ISMPP
https://thepublicationplan.com/2018/06/27/video-preprints-and-peer-review-feedback-from-the-14th-annual-meeting-of-ismpp/
Wed, 27 Jun 2018

Mary Yianni, Publisher at Taylor & Francis, reviews the different models of peer review and explains the emerging role of preprints in biomedical sciences, including feedback from the recent 14th Annual Meeting of ISMPP.

Recorded 13 June 2018 at a MedComms Networking event in Oxford. Produced by NetworkPharma.tv.

Mary’s presentation (PDF format) is available here.
]]>
https://thepublicationplan.com/2018/06/27/video-preprints-and-peer-review-feedback-from-the-14th-annual-meeting-of-ismpp/feed/ 0 5159
Meeting report: summary of the 14th Annual Meeting of ISMPP – Part 2 https://thepublicationplan.com/2018/05/30/meeting-report-summary-of-the-14th-annual-meeting-of-ismpp-part-2/ https://thepublicationplan.com/2018/05/30/meeting-report-summary-of-the-14th-annual-meeting-of-ismpp-part-2/#respond Wed, 30 May 2018 11:18:37 +0000 https://thepublicationplan.com/?p=5111

The 14th Annual Meeting of the International Society for Medical Publication Professionals (ISMPP) took place over three days (30 April–2 May) in National Harbor, MD, USA, and attracted a record-breaking attendance of over 600 delegates.

The theme of this year’s meeting, ‘From Publication to Practice: Advancing Science Through Effective Communication’, focused on the evolving role of the publication professional, the importance of effective communication, and emerging trends within the medical publications industry. These topics were explored through keynote addresses, lively panel discussions, and interactive roundtables. As in previous years, members were also given the opportunity to share their own research via oral and poster sessions. The meeting was attended by both highly experienced professionals and relative newcomers to the industry. The meeting also saw Chris Winchester (Oxford PharmaGenesis) begin his post as the new Chair of the ISMPP Board of Trustees.

A summary of the second half of the meeting (Tuesday afternoon and Wednesday) is provided below for those who could not attend, and as a timely reminder of the highlights for those who did. Many of the presented slides and posters are available to ISMPP members here. A summary of the first half of the meeting can be found here.

Is pharmaceutical industry research posted as preprints?

Tuesday afternoon started with the member oral presentations, the first of which was delivered by Heather Lang (Oxford PharmaGenesis) and described her team’s research on preprints. Preprints have been the subject of much debate within the medical field in recent years and were a key topic at the meeting. Lang’s research addressed the following questions:

  • Is the pharmaceutical industry posting research as preprints and, if so, what kind of research is being posted?
  • Are comments on the research being posted and do authors respond to such comments?
  • Have preprints been fully published and in what timeframe?

Findings showed that:

  • Only around 1% of preprints on bioRxiv (an online platform for the distribution of preprints in the life sciences) reported industry-authored research. Most of this was basic science, with bioinformatics, genomics and genetics the most common fields.
  • Two-thirds of preprints were not revised on the system once posted.
  • Two-thirds of the preprints from pharmaceutical companies (posted up until the end of 2016) had been published in full.

Lang acknowledged that the pharmaceutical industry could share work more rapidly using preprints than with traditional peer-reviewed publications, but stressed that education around preprints is needed. She concluded by suggesting that the industry evaluates the appropriateness of preprints for different types of research, runs pilots to explore preprint services, and develops and shares policies regarding the appropriate use of preprints.

Attention analysis simulation of scientific posters aligns authors’ intent with viewers’ focus

The second member oral presentation was given by Steve Palmisano (MedThink SciCom). This team’s research involved the use of machine learning algorithms to simulate audience visual attention via eye-tracking movements. The approach was used to evaluate the effectiveness of 16 scientific posters from a recent ISMPP annual meeting. Results showed that headlines should be kept short, especially when used on a coloured background. Graphic elements that lead the eye worked well, as did clear figures free from unnecessary elements. Use of callout boxes and displaying key results within figures were also found to be effective.

Examples of less effective posters included those that resulted in very long eye movements between areas of focus, those which drew the reader’s attention to the wrong area, and those which featured callout boxes with too much copy. Palmisano advised placing QR/VR codes strategically, when used, within the headline or conclusions box for example.

Palmisano summarised that an effective poster should draw the reader’s attention to the most important elements, enable an efficient scan path pattern that is appropriate for learning, and be visually attractive.

Publishers panel: simplifying and streamlining – working together to improve the quality of medical publications

The focus of this session, featuring representatives from major publishers, was practical advice and insights into improvements in the publishing process. Terry Materese (Elsevier) moderated the session and was joined by Chris Baumle (Elsevier), Jan Seal-Roberts (Adis, Springer Healthcare Ltd) and Caroline Halford (Adis, Springer Healthcare Ltd).

The key discussion points and take-away messages are summarised below; many of the themes that emerged aligned with focus topics for this year’s meeting.

Authorship and peer review

  • Author transparency and acknowledgement of author contributions are critical; the use of systems such as CRediT should be encouraged (as discussed during Monica Bradford’s talk earlier in the day).
  • A blockchain initiative is currently being trialled by Springer Nature to support the peer review process. The aim of the initiative is to create a platform where all review activity is deposited in a blockchain owned by the initiative (currently Springer Nature, Digital Science, and ORCiD), with the advantage that every deposit will be verifiable and auditable, thereby increasing transparency and reducing risk of manipulation.
  • Elsevier has developed a Reviewer Recognition programme whereby peer review is a measurable research output for which credit is given. Both Springer Nature and Adis journals reward reviewers by waiving the article processing charge if they submit articles to one of their journals.

Preprints and data sharing

  • A number of preprint services are now available, including bioRxiv (covered in the keynote presentation earlier in the day). Elsevier supports preprints and allows authors to share their preprints at any time on arXiv or RePEc. Other publishers, including Springer Nature, are also starting to take this approach. With the increased popularity of preprint services it will be important to update anti-plagiarism software such as iThenticate, so that preprints are not classified as plagiarism.
  • Elsevier and Springer Nature have data sharing policies in place regarding the deposit of research data in relevant repositories and the subsequent citation/linking of the dataset in the article. Data in Brief (Elsevier) provides researchers with an avenue through which they can share and reuse datasets by publishing data articles.

Other themes

  • Elsevier offers an Article Transfer Service whereby, if an article is deemed unsuitable for publication in a chosen journal, the editor may suggest submission to an alternative journal. The transfer can happen in minutes, negates the need for reformatting, and reduces reviewer time/burden as previous reviewer comments are carried over. Springer Nature offers a similar process (Transfer Desk).
  • The use of social media is encouraged by Elsevier and Springer Nature; both publishing houses provide guidance for authors and editors.
  • Readership can be boosted by article enrichments including plain language summaries, audio slides, graphical abstracts, video abstracts and video articles.

The presentation concluded with a busy question and answer session in which the panellists informed delegates that video articles/abstracts do not have to be ‘polished’, article enrichment items adopt the same copyright as the main article, preprints remain available indefinitely, and that articles with digital features are downloaded more often than those without.

Successful integration of enhanced digital technology into scientific congresses

There are an increasing number of options available for enhancing the content of congress posters and presentations. In this session, John (Zeke) Czekanski (Fishawack US), Kimberly Della Penna (Merck) and Travis Hicks (American Society of Clinical Oncology [ASCO]) described the digital tools currently being utilised at meetings, future trends, and the pros and cons of these options in practice.

Czekanski began by providing a snapshot of current experience with digital tools and formats, based on a survey of ISMPP members and client service teams. Results revealed that most respondents had utilised various types of enhanced technology, including digital posters, virtual/augmented reality, meeting apps and QR codes. Although most considered these digital offerings to be beneficial to the end user, their value was not typically measured, suggesting that there was scope to better understand the benefits of these formats.

Della Penna summarised available technologies that offer improvements on traditional presentation types. These include the use of QR codes in oral presentations or posters, which link the user to the presentation slides, e-poster, or additional content. Unlike paper posters, e-posters can easily be made available after presentation at a congress. Videos, animation or audio commentary can be embedded within e-posters to provide greater understanding of the work, but can be expensive to develop and require advanced planning.

Hicks described ASCO’s approach to digital meeting tools. At the 2017 annual meeting, almost 3,500 e-posters and videos were accessible via the ASCO Meeting Library at digital kiosks during the congress and online afterwards. Despite the flexibility and portability offered by digitally presented content, ASCO continues to offer physical versions of posters as many attendees prefer this approach. Hicks concluded that the decision on how to consume congress material should be left to the attendee and that no single approach should be forced as it would be met with resistance.

Following this presentation, delegates had the opportunity to attend one of three parallel sessions, two of which are summarised below. The third session provided an overview of the ISMPP Certified Medical Publication Professional (CMPP™) programme; more details can be found here.

Incorporating value into planning for early development compounds: the impact of market access trends

Emerging trends in market access and the implications for publication planners were the topic of this comprehensive presentation by Shana Traina (Janssen) and Kimberly Dittmar (Cello Health Communications). Described as the fourth ‘hurdle’ in drug development (after safety, efficacy and quality), market access is driven by many factors, as well as by payer questions regarding the value and budget impact of a drug. These include: therapeutic effectiveness, quality of evidence, perceived medical need, medical appropriateness, cost effectiveness, potential for underuse or overuse, and political expediency.

The value and budget impact of a therapy are usually assessed by governments using health technology assessments. When there are insufficient data to support value, access to a drug can be severely restricted based on clinical (eg limited to patient subgroups) and economic (eg managed entry agreement) considerations. The presentation continued with a review of the International Society for Pharmacoeconomics and Outcomes Research’s top 10 trends in health economics and outcomes research (HEOR) and their potential impact on market access.

In the next part of the session, a case study illustrated how market access considerations (eg disease burden, clinical management, payer perspectives) can influence the scientific platform and publication plan. The example highlighted the benefits of incorporating value measures, market access factors, and HEOR data generation and dissemination plans early in the clinical development programme. The presentation concluded with a call to action — to engage HEOR stakeholders early in the drug development and publication planning processes.

Permission granted? The ins and outs of global copyrights

In this session, moderated by Sharon Suntag (IQVIA), Liz Bilodeau and Stephen Garfield (both from Copyright Clearance Center [CCC]) delved into the complex topic of copyright laws and permissions, and how the principles differ globally.

Bilodeau introduced the CCC and the basic principles of copyright, and walked delegates through 17 US Code § 106 (‘Exclusive rights in copyrighted works’), the US statute that covers copyright related to reproduction, distribution, derivative works and other aspects of reusing content. There is no single international copyright law; laws vary from country to country, and permission to use copyrighted work often excludes reuse in other countries and languages.

Exploring the complexities of global copyright in more detail, Garfield outlined how intellectual property and content sharing are guided by various policies and laws:

  • treaties and agreements: eg the Berne Convention, other treaties administered by the World Intellectual Property Organization, and TRIPS
  • national laws: the two broad approaches are termed ‘continental’, in which personal rights reside with the author and are not transferred, and ‘Anglo-American’, in which an economic approach to copyright and set arrangements between society and content creators is adopted
  • licenses: various models exist in different countries, including statutory licenses (eg as in Germany), blanket mandates (UK, Canada), and voluntary licenses (US). Collective licensing models also exist; in these models reproduction rights organisations, such as the CCC, license the use of content from many rights holders, collect royalties, and distribute them to the rights holders
  • individual company policies.

Garfield continued by discussing considerations related to open access, creative commons licenses, and the importance of obtaining permission to reuse or repurpose content. Key takeaways from the presentations are for organisations to appoint a copyright expert, create/update and communicate copyright policies, and provide effective licensing solutions for employees. Additional resources around copyright and permissions are also accessible on the CCC website.

Following the parallel sessions, delegates could attend one of 10 roundtable discussions, which afforded the opportunity to have an in-depth conversation on their chosen topic in a smaller group setting. Day 2 concluded with the annual evening networking session.

Increasing speed, efficiency and transparency in medical publishing through open access

Day 3 of the meeting began with opening remarks by Chris Winchester (Oxford PharmaGenesis and new Chair of ISMPP Board of Trustees).

The proportion of articles published open access is increasing. In this panel discussion, Valerie Philippon (Shire), David Sampson (ASCO), Maria Alu (Columbia University Medical Center) and moderator LaVerne Mooney (Pfizer) shared their experiences with open access from pharma and non-pharma perspectives.

Mooney began by giving an overview of open access, the current options available and the advantages it can bring, including speed of dissemination, accessibility of research and increased transparency. A poll revealed that most of the audience considered an article being free to read enough to qualify it as open access, whereas others felt the ability to re-use the article, either non-commercially or without restriction, was important; most considered an article processing charge of less than $3,000 a reasonable price for publishing open access. While governmental and charitable research funders increasingly mandate open access publishing, it is pharma that funds over half of all medical research.

To date, Shire is the only pharmaceutical company to have introduced a mandatory open access policy. Philippon discussed the impact of this policy, introduced in January this year, which has seen the proportion of articles reporting Shire-funded research published open access rise from 75% to 100%. This has resulted in increased expenditure on publications, but this was considered reasonable compared with other methods of disseminating clinical data (eg the cost of travelling to and presenting at conferences).

Alu described her experience at Columbia University, which implemented its open access policy in 2011 and has led the way by mandating that all faculty members make their research articles freely available. Alu described some of the real and perceived barriers to open access that she has encountered, such as the perception that fully open access journals are lower tier than society-led journals, which are often the first port of call for many authors.

During questions from the audience, Sampson was asked why the Journal of Clinical Oncology (J Clin Oncol; official journal of ASCO) did not offer open access for industry-funded research. Sampson stated that this was partly due to concerns over the re-use of content for commercial purposes. He added that J Clin Oncol may in future offer open access with non-commercial re-use options for these articles and that the journal is currently reviewing its policy. In response to a comment about the low acceptance rate of J Clin Oncol, Sampson directed the audience to potential alternatives, including the new sister journals JCO Precision Oncology and JCO Clinical Cancer Informatics.

Member oral presentation: patient lay summaries in biomedical journals: what and how much is currently available?

Ramji Narayanan (SIRO Clinpharm Pvt. Ltd) presented findings from his study investigating the current use of patient lay summaries, or plain language summaries (PLS), by biomedical journals. PLS are intended to communicate key research findings in simplified language, without jargon or technical terminology, to a non-expert audience.

The study found that 44 biomedical journals currently publish PLS. These were often not mandatory and word count restrictions ranged from a single summary sentence up to 700 words; the most common was 250 words. Guidelines for writing PLS were often not given and in some cases the journal’s editorial team provided the PLS. Analysis of readability using online tests revealed that many summaries were pitched at too high a level, suggesting more patient-friendly language was still needed. Six of the journals that published PLS encouraged authors to involve patients in the writing or reviewing of the summaries, while one journal mandated patient involvement.

Narayanan concluded that there was a need to streamline PLS reporting practices among journals and maintain a consistent style and reading level. He suggested the Plain Language Expectations for Authors of Cochrane Summaries could be adapted for this purpose.

Following the oral presentation, delegates could attend one of three parallel sessions or a guided poster tour; the parallel sessions are summarised below.

“Mind the gap” – life cycle management and HEOR integration

Kristen Quinn (Peloton Advantage) and Ilia Ferrusi (Novartis Oncology) discussed how gap analysis can be used throughout the life cycle of a product and the benefits of incorporating HEOR and real world evidence.

Quinn began by differentiating gap analysis, which evaluates deficiencies in specified topics, from other similar types of research such as situation or competitor analysis. Quinn gave examples of how gap analysis can be used to answer different questions throughout a product life cycle. For example, during early stages of clinical development a gap analysis may assess the unmet needs of existing management strategies, whereas closer to product launch, there may be specific questions pertaining to patient subgroups that can be answered. Quinn recommended that analyses are routinely repeated so that new and evolving data can be used to fill existing gaps and to identify new ones. Quinn emphasised the importance of fully understanding the research question to ensure the most appropriate analysis is carried out, using quality source data, and ensuring results and their implications are communicated effectively to the client/team.

Ferrusi described how, traditionally, HEOR might be carried out during the latter stages of a product’s life cycle to determine, for example, its relative value as an intervention. Increasingly, however, HEOR is being incorporated earlier in product development to establish unmet needs from an economic perspective, such as the cost-effectiveness of existing therapies or the economic burden of disease management. Ferrusi discussed how integrating this type of evidence into gap analysis can inform HEOR strategy and evidence-generation plans and identify opportunities for differentiation. She concluded by summarising tools that are currently available to assess the quality of HEOR and real world evidence, with a reminder to utilise non-traditional or ‘grey’ literature in addition to traditional publications.

Lay summaries for biomedical journals

Clinical trial lay summaries will soon become a legal requirement in the EU. Although the same is not true for lay summaries published in biomedical journals, their adoption should be encouraged. In this session Jan Seal-Roberts (Adis, Springer Healthcare Ltd) and Tom Rees (Oxford PharmaGenesis) provided practical guidance on the development of lay summaries and discussed potential trends from a publisher’s perspective.

Lay summaries for peer reviewed publications facilitate the readability and understanding of a research article by non-expert readers such as patients, caregivers, tax-payers and non-specialist clinicians, as well as by ‘time-poor’ specialists. Many patients and caregivers are familiar with scientific publications, but with abstracts becoming more difficult to read, there is a growing need for publication-specific lay summaries. After the main differences between publication lay summaries and clinical trial lay summaries were presented (eg rationale, scope, endpoints included, format/style and reading age), the presentation focused on the increasing uptake of publication lay summaries.

Several journals now accept or require lay summaries, including PLOS Medicine, PNAS and Annals of the Rheumatic Diseases. Moreover, all Adis articles can be accompanied by a lay summary, which is peer reviewed. The British Journal of Dermatology now publishes a plain language summary for every accepted paper. Seal-Roberts and Rees emphasised the importance of lay summaries being made open access so that patients can find them. However, although this is the case for some journals, eg the British Journal of Dermatology and those published by Adis, lay summaries are not always freely accessible. It was noted that a number of non-journal opportunities for lay summaries also exist, including Kudos, ScienceOpen, The Conversation, Atlas of Science and Figshare.

Some best practice tips for developing a publication lay summary were also recommended.

Seal-Roberts and Rees concluded that publication lay summaries complement those mandated for clinical trials and have the potential to boost the impact of published research to a wider audience.

Posters of the future

In this highly engaging and interactive session facilitated by Eline Hanekamp, Niina Nuottamo, Hester van Lier and Shanthi Voorn (all from Excerpta Medica), delegates were split into groups and tasked with developing a poster. Delegates were informed that the poster must include the traditional Introduction, Methods, Results and Conclusions sections, but should be designed in a non-conventional style (eg be highly visual, interactive, employ infographics or augmented reality, have increased white space, or adopt the inverse pyramid approach to the flow of information).

After each group had developed its poster, all delegates reviewed the posters and discussed what made each one effective. Examples of best practice included:

  • minimal use of words
  • increased focus on the results
  • placing the results first or more prominently on the poster
  • having a good balance of figures versus text
  • having a highly visual conclusions section
  • including a patient lay summary
  • including QR codes linking to different versions of the poster tailored to specific audiences (eg clinicians, patients, payers).

The facilitators concluded that posters are an effective communication tool. They suggested that putting some of the discussed approaches into practice would lead to a highly engaging and impactful poster that stands out from the crowd.

News you can use

In this rapid-fire session, delegates heard updates on a number of hot topics in medical publishing, as summarised below.

Does pharma publish clinical trial data? What’s NCT number got to do with it – LaVerne Mooney (Pfizer)

Mooney reminded delegates that publications and press articles have, in the past, reported that up to 50% of studies do not get published, although recent analyses have shown these rates have improved. A factor that can hinder such analyses is matching a publication to its ClinicalTrials.gov record. Mooney recommended that the clinical trial registration number be included in the abstract of all submitted/published articles and, if it is not, that authors contact PubMed or the publisher to get abstracts updated or corrected.

Right to Try bill – Juliana Clark (Amgen and Outgoing Chair of ISMPP Board of Trustees)

This US bill, which controversially allows terminally ill patients to receive investigational drugs not yet approved by the FDA, was passed by the House of Representatives on 21 March 2018. Commenting on the potential impact for medical publication professionals, Clark noted that trial recruitment may slow down; however, the bill could also result in additional relevant publications, such as case reports.

Patient engagement update: GRIPP2 guidelines (patients as manuscript reviewers) – Jan Seal-Roberts (Adis, Springer Healthcare Ltd)

Seal-Roberts provided an overview of GRIPP2 (Guidance for Reporting Involvement of Patients and the Public), guidance that aims to improve the quality, transparency and consistency of patient and public involvement in health and social research. Research is ongoing to assess the extent of patient and public involvement in research and medical publications; for example, a 5-year randomised controlled trial will assess whether the lay public can enhance the peer review of journal articles.

GAPP announcement – Jackie Marchington (Caudex)

As of 2 May 2018, and after 6 years of service, the Global Alliance of Publication Professionals (GAPP) is stepping down and handing back responsibility to professional bodies, including ISMPP and the American and European Medical Writers Associations (AMWA and EMWA). The GAPP website and email address will be maintained for approximately one more year. The full press release is here.

Keynote address: the truth about doctors

In the final keynote of the meeting, a thought-provoking presentation by Hilary Gentile (McCann Health) looked at the current perceptions and roles of doctors as healthcare providers. She began by summarising some of the recent trends in the healthcare space. For example, with ~80% of the world’s health data reportedly generated in the last 2 years, doctors have been overwhelmed by information. The increased volume and access to such information also means that patients are much better informed. Additionally, time and cost pressures within health services mean doctors spend very little time with patients (2.3 minutes per patient visit for community doctors in the US) and more time on paperwork.

Gentile reported results from the ‘Truth about Doctors’ study undertaken by McCann Health. This survey of nearly 2,000 physicians from 16 countries aimed to understand the views of doctors on the “expectations and reality” of their roles. There were some surprising findings! Thirty-four per cent of survey respondents thought they could have become doctors without any training, while 53% of 18–34 year olds believed that technology could eliminate the need for doctors in the future. Doctors reported that they entered the profession to make a difference, but many felt a great sense of frustration with the current practice of medicine. This discontent is not without consequences: 66% of doctors had trouble sleeping and 58% reported marital problems.

The study showed there is a disconnect between doctors’ expectations and the reality of doctoring – but what are the solutions? Some of the insights offered by Gentile included:

  • embrace and better harness the wealth of available information
  • humanise new technologies to advance the doctor–patient shared care paradigm
  • focus on all healthcare professional roles, not just doctors, and create new multi-stakeholder pathways to care
  • empower doctors to regain mastery in patient care.

Gentile concluded that, with 80% of doctors in the survey stating they’d take the same career pathway again, the good news is the profession is unlikely to become extinct! However, the role of doctors in the provision of healthcare will almost certainly change.

Closing the meeting, Al Weigel (ISMPP President & CEO) thanked the programme, abstract and global workshop committees for their contributions, and the meeting sponsors and exhibitors. He concluded by informing delegates that an ‘ISMPP University’ webinar of the meeting highlights will be held in June, and by reminding delegates of the upcoming ISMPP West (11–12 October 2018), ISMPP EU (22–23 January 2019) and Annual (15–17 April 2019) meetings.

——————————————————–

By Aspire Scientific, an independent medical writing agency led by experienced editorial team members, and supported by MSc and/or PhD-educated writers


]]>
https://thepublicationplan.com/2018/05/30/meeting-report-summary-of-the-14th-annual-meeting-of-ismpp-part-2/feed/ 0 5111
PubMed Commons now discontinued https://thepublicationplan.com/2018/03/09/pubmed-commons-now-discontinued/ https://thepublicationplan.com/2018/03/09/pubmed-commons-now-discontinued/#respond Fri, 09 Mar 2018 09:42:23 +0000 https://thepublicationplan.com/?p=4917

The US National Library of Medicine (NLM) recently announced the closure of PubMed Commons. It was hoped that the service, launched in 2013, would provide a forum for scientific discussion online. Authors of articles indexed in PubMed were able to register and then add public comments to other PubMed-indexed articles. Interaction among members was encouraged, via methods such as rating the usefulness of comments and inviting other members to join discussions.

Despite this, participation was low, with comments added to only 6,000 of the 28 million articles on PubMed, leading the NLM to withdraw the service. New comments could not be added after 15 February 2018 and, as of 3 March 2018, existing comments can only be seen by downloading them from the National Center for Biotechnology Information (NCBI) website.

Is the failure of PubMed Commons indicative of a wider lack of engagement with medical publications? Or is it simply a reflection of the choices that must be made by time-poor researchers?

——————————————————–

Summary by Hannah Mace MPharmacol, CMPP from Aspire Scientific
]]>
https://thepublicationplan.com/2018/03/09/pubmed-commons-now-discontinued/feed/ 0 4917