Selective publication – The Publication Plan
A central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing.

The vital role of inclusive publishing in advancing science
https://thepublicationplan.com/2025/09/17/the-vital-role-of-inclusive-publishing-in-advancing-science/
17 September 2025

KEY TAKEAWAYS

  • Inclusive publishing recognises the value of all validated research in enhancing scientific reproducibility and progress.
  • Publishers must embrace inclusive practices to reflect diversity within the scientific landscape.

Inclusive journals value null results, preliminary data, and experimental design papers, which promote reproducibility and can hasten innovation. Unlike selective journals, which prioritise ‘high impact’ discoveries, inclusive journals recognise that research does not need to be ground-breaking to advance science. In a Springer Nature article, Ritu Dhand discusses the benefits of inclusive publishing.

COVID-19: a case study

Dhand highlights how the COVID-19 crisis created an unprecedented need for peer-reviewed science. Journals responded by adopting inclusive publishing practices, recognising the importance of preliminary data and innovative methods. The rapid dissemination of pilot studies and null results enabled scientists worldwide to focus precious time and effort on pushing unexplored frontiers. Inclusive publishing proved pivotal in an extraordinary global effort to compress drug discovery timelines from years to months. However, these inclusive practices faded after the pandemic.

The price of selectivity

Dhand notes that 50% of funded research is unpublished. Most rejections occur not because the research lacks scientific rigour, but because journal editors consider it to lack significance. A study prepared for the European Commission estimated that in 2018, €26 billion was wasted on duplicated research in Europe alone.

50% of funded research is unpublished. Most rejections occur not because the research lacks scientific rigour, but because journal editors consider it to lack significance.

Value beyond citation metrics

Inclusive journals often publish a high number of papers, leading to lower impact factors. However, the value of the research can be measured by other metrics. For example, over a third of Springer Nature’s inclusive content addresses the UN Sustainable Development Goals, demonstrating its societal impact.

Diversity in research publication

Inclusive publication practices also involve increasing the diversity of authors and countries contributing research. Dhand highlights that similar proportions of research publications come from Western nations and Asia, yet editorial boards and reviewer pools remain Western dominated. As key decision makers, individuals in these roles should reflect the diversity of the research communities.

Dhand acknowledges that selective journals will continue to offer a platform for ground-breaking research, but highlights the need for widespread inclusive publication practices to satisfy the evolving needs of science and society.

—————————————————

Do you believe selective publication practices are inhibiting scientific advancement and innovation?

The BMJ pushes back on “anti-gender ideology”
https://thepublicationplan.com/2025/04/30/the-bmj-pushes-back-on-anti-gender-ideology/
30 April 2025

KEY TAKEAWAYS

  • A recent instruction from the Trump administration ordered CDC scientists to withdraw articles that include “forbidden terms” related to gender from scientific journals.
  • BMJ editors urge other journals to maintain the integrity of scientific research by resisting “bow[ing] to political or ideological censorship”.

A recent instruction from the Trump administration directed US Centers for Disease Control and Prevention (CDC) scientists to withdraw or retract any submitted (but not yet published) articles that include “forbidden terms” such as gender, transgender, LGBT, or transsexual. In an opinion article published in The BMJ, Jocalyn Clark (International Editor) and Kamran Abbasi (Editor-in-Chief) warn of the dangers of blocking important medical information from publication.

Censoring sex and gender in published research

Clark and Abbasi explain that sex and gender data are critical for understanding differences in outcomes and experiences among populations and individuals. The authors emphasise that blocking gender-related data is not only harmful for patients, but compromises the integrity of scientific research as a whole. They believe that attempting to censor these data is a political manoeuvre based on “anti-gender ideology” and “a return to fundamentalist values”, in line with the recent disappearance of other politically charged content on topics like immunisation and contraception from CDC websites and datasets.

“Blocking gender-related data is not only harmful for patients, but compromises the integrity of scientific research as a whole.”

Violation of publication ethics

Clark and Abbasi highlight several ways in which the instruction breaches publication ethics:

  • Being at odds with the reporting standards adhered to by medical journals, such as the Sex and Gender Equity in Research (SAGER) guidelines.
  • Conflicting with authorship criteria, which ensure that authors are not only credited for their work, but also accountable for it. Removing an author who qualifies for authorship, even at their own request, constitutes ghostwriting.
  • “Muzzling” important medical data. Although authors are within their rights to withdraw submitted papers from a given journal prior to publication, the data should still be published.

The authors call upon journal editors to resist the instruction on the grounds that they have a “duty to stand for integrity and equity”, which supersedes any “political or ideological censorship”.

————————————————–

Do you agree that authors and editors complying with the instruction would compromise the integrity of scientific research?

Is abstract peer review affected by institutional and geographical bias?
https://thepublicationplan.com/2021/09/23/is-abstract-peer-review-affected-by-institutional-and-geographical-bias/
23 September 2021

As the quantity of academic publications and the accompanying data overload continue to grow, deciding what to read has become increasingly difficult. The status of an academic institution or country is a recognised ‘cue’ used by some academics to determine the most important, or most reputable, research. This feeds into the halo effect, which suggests that publications from ‘higher-status’ locations are viewed more favourably, potentially influencing peer-review processes. A recent study, published in eLife, put this theory to the test by examining the impact of authors’ geographical location and academic affiliation on the scholarly evaluation of scientific abstracts.

The study by Professor Mathias Wullum Nielsen and colleagues recruited over 4,000 research-active academics from six disciplines: astronomy, cardiology, materials science, political science, psychology, and public health. Participants were asked to review adapted or newly created abstracts, appropriate to their respective fields, attributed to fictitious authors and affiliations. The discipline-specific abstracts, which differed only in author country and institution, were assessed on the following criteria:

  • originality, credibility, significance, and clarity
  • whether the reviewer would decide to open the full text and continue reading based on the abstract
  • whether the reviewer would choose to include the abstract for a conference presentation.

The authors discovered weak and inconsistent evidence for bias against countries or institutions of perceived lower scientific status. Furthermore, and contrary to expectations, peer assessors in one discipline (political science) were less likely to consider an abstract appropriate for a conference if its author was affiliated with a more prestigious, rather than less prestigious, US university.

The authors discovered weak and inconsistent evidence for bias against countries or institutions of perceived lower scientific status.

The weak evidence for status bias reported by Prof. Nielsen might reassure researchers that abstracts are likely to be judged on merit. However, the authors stated that their findings should be viewed in light of several limitations, noting that in real-world settings, abstracts might be rejected before being read if reviewers perceived the author’s institution or country of origin to be of lower scientific status. This, in turn, might reduce the likelihood of these abstracts being cited. Furthermore, status bias may still exist in other forms of peer review, including evaluation of journal articles or grant applications. The authors encouraged further research into additional factors that might influence geographical or institutional status hierarchies in the peer-review process.

—————————————————–

Do you believe academic peer review is influenced by country- or institution-related status bias?

—————————————————–

Study reveals lack of consistency in reporting of COVID-19-related preprints
https://thepublicationplan.com/2021/05/28/study-reveals-lack-of-consistency-in-reporting-of-covid-19-related-preprints/
28 May 2021

Preprints are used to expedite the release of research findings into the public domain, but their inherent uncertainty is not well understood outside the scientific community. During the COVID-19 pandemic, the need for credible and accessible health information caused a dramatic increase in the use of preprints by media outlets. However, a recent study published in Health Communication found that almost half of 457 news articles from 15 media outlets that cited COVID-19-related preprints did not frame the preprint research as uncertain in any way.

Dr Alice Fleerackers and colleagues used data from two preprint servers (medRxiv and bioRxiv), Altmetric data, and content coding to analyse the use of COVID-19 preprints in media coverage during the early months of the pandemic, and to assess if and how the uncertain nature of this type of research was explained. Although almost all the news stories hyperlinked to a preprint, there was great variation in how the preprint content was covered. While 80.5% identified the cited content as research, nearly 20% included a hyperlink with no explanation of where it led or indication that it linked to a preprint.

Regardless of the use of hyperlinks, more than half of the stories highlighted the scientific uncertainty associated with preprints, using a variety of framing devices, such as:

  • explaining that the content was unreviewed
  • identifying the content as a preprint
  • adding that further verification was needed
  • noting that the work was preliminary.

The authors speculate that media outlets may avoid adding explanations around preprints because simply using the word ‘research’ gives credibility to the reported content, and there may be reluctance to emphasise any uncertainty. They may also want to avoid alienating audiences who are not familiar with evaluating scientific content. However, it is encouraging that several outlets in the study reported on preprint research with adequate explanation of the uncertainty surrounding science and peer review. As the pandemic has reminded us, it is critical that the public have access to scientific research, and that it is reported by media outlets in an accurate and transparent way.

——————————————————–

Do you think media outlets are responsible for explaining the scientific uncertainty around preprints when reporting their content?

——————————————————–


Research integrity in the COVID-19 era: insights from Retraction Watch co-founder Ivan Oransky
https://thepublicationplan.com/2021/03/17/research-integrity-in-the-covid-19-era-insights-from-retraction-watch-co-founder-ivan-oransky/
17 March 2021

Ivan Oransky has been at the forefront of efforts to highlight research integrity issues for over a decade, co-founding Retraction Watch in 2010 to track and publicise retractions in the scientific literature. Following his presentation at the 2020 European Medical Writers Association (EMWA) symposium, we spoke to him about retractions during the COVID-19 pandemic and steps he believes should be taken to tackle research integrity challenges in the future.

First of all, COVID-19 is having a huge, ongoing impact on our daily lives and on scientific research – reflected in the huge number of COVID-19-related publications. At the same time, Retraction Watch’s list of retracted COVID-19 papers continues to grow. Which of the COVID-19-related retractions to date do you think have been the most notable, and what do these cases tell us about current practice in scientific publishing?

“I don’t know that I would choose any particular COVID-19-related retraction as most notable – I suppose that’s like asking which of your children is your favourite. There are certainly the ones that gained the most attention – if I had to pick one, it would be the Lancet paper about hydroxychloroquine that was based on a very questionable (at best) dataset from a company called Surgisphere. I think that paper captured the most attention, and close behind it was a New England Journal of Medicine (NEJM) paper that was also based on those alleged data, but wasn’t about hydroxychloroquine so didn’t capture quite so many eyeballs. Those are the retractions where I think a lot of people had a Casablanca “shocked, shocked!” moment, with the idea that, somehow, this was completely different from anything that’s ever happened in science before. And that’s just nonsense – complete revisionist history.

I think it’s more important, or useful in a way, to look at the whole pattern. I wouldn’t call these data so much, but there have been 87 retractions of COVID-19-related papers to date. That number isn’t all that different from what you would expect to see given the number of papers – and preprints – that have been published.

There have been 87 retractions of COVID-19-related papers to date. That number isn’t all that different from what you would expect to see given the number of papers – and preprints – that have been published.

However, 10 of these retractions were because Elsevier published manuscripts twice that authors had only submitted once. What that speaks to is the rush, or the fast pace, of publishing in the COVID-19 era. The fast pace isn’t so bad, but the system of peer review and publication hasn’t really adapted well enough to it over the years – although I would argue that there have been some strides in that direction.

The fast pace of publishing in the COVID-19 era…isn’t so bad, but the system of peer review and publication hasn’t really adapted well enough to it over the years.

To me, it’s not a particular retraction that’s important – rather the phenomenon that everyone’s rushing and there’s a lot of sloppiness. If anything, I’d say that the proportion of retractions due to misconduct is much lower than you might see in a typical dataset of retractions. I don’t know what to make of that yet, and it could be that people just haven’t found the cases of misconduct so far, but I think that that’s worth paying attention to. It really speaks more to sloppiness and rushing rather than out-and-out fraud accounting for COVID-19-related retractions.”

The proportion of retractions due to misconduct is much lower than you might see in a typical dataset…it really speaks more to sloppiness and rushing rather than out-and-out fraud accounting for COVID-19-related retractions.

While journals have acted quickly to retract some COVID-19-related publications, in general, the pace of investigation and retraction is very slow. However, you’ve recently highlighted a “double-standard” involving rapid retraction when papers draw negative attention on social media. How should journals prioritise their investigations to address allegations in a timely way?

“Well, I think that what journals and publishers should do is actually prioritise investigations. Although some argue that the problem is certain papers being retracted before other papers, the problem is that not enough papers are being retracted, full-stop. There are countless papers being flagged – whether that’s on PubPeer, through correspondence with journals or by scientific sleuths like Elisabeth Bik – where journals are doing nothing. Maybe they’re investigating the cases and it’s just taking them a long time – but why is it taking them so long?

One positive development over the past few years is that some journals are actually hiring entire staffs to look at allegations and to try to catch issues that might lead to retraction before articles are published. Those are the journals and publishers that I think everyone should emulate, such as the Journal of Biological Chemistry, PLOS ONE, and FEBS Press.

Some journals are actually hiring entire staffs to look at allegations and to try to catch issues…before articles are published. Those are the journals and publishers that I think everyone should emulate.

So, to me, the issue is not so much whether we should retract some papers before others. The more important question is ‘why are journals not prioritising investigations, full-stop?’ If there has to be some prioritisation, then we should retract papers with fatal flaws that seem to be doing harm, or have the potential for doing harm, first. The problem is that then nobody will do anything about all of the other papers. I really hesitate to talk about prioritising certain ‘retractable offences’ over others as I know what will happen – I’ve been watching journals ignore problems for a decade. If you give journals and publishers an excuse, or a rationalisation for why they’re not getting to something they should be getting to, you’re creating more of an issue, and journals know that.”

I really hesitate to talk about prioritising certain ‘retractable offences’ over others as I know what will happen – I’ve been watching journals ignore problems for a decade.

Recently, Retraction Watch discussed a Scientific Reports article retracted following a post-publication peer review round requested by the Editor. Are changes to peer review processes needed to avoid this kind of retraction? Do you think increasing adoption of post-publication and open peer review processes will impact retraction rates?

“I think whether changes are needed to peer review processes depends on what your goal is. Is your goal to prevent retractions, or is it to actually have a transparent publication process that reflects how science works instead of having papers be the be all and end all in terms of promotions, tenure, and so on? I think you have to decide what your goals are, and once you’ve decided this, you can create a system that makes sense.

Part of what always puzzles me is why journals can’t just be honest all the time about how much gets through peer review that shouldn’t.

Part of what always puzzles me is why journals can’t just be honest all the time about how much gets through peer review that shouldn’t. In my opinion, journals have never done a good job of answering this. I hope that one of the illuminating things about the Lancet and NEJM COVID-19-related retractions is that the editors were really forced to admit that their peer review systems were not well-equipped for those papers, although the journals approached this in different ways. These lessons are a good thing, but it’s not as if these issues with peer review only happen when there’s a retraction that catches everyone’s attention.

I hope that one of the illuminating things about the Lancet and NEJM COVID-19-related retractions is that the Editors were really forced to admit that their peer review systems were not well-equipped for those papers.

The paper in Scientific Reports caught everyone’s attention because of what it’s about and the conclusions [the paper made links between obesity and dishonesty], but papers are slipping through like this all the time. Journals need to acknowledge this and provide their peer review reports. I do think that, even if it’s anonymised, publishing peer review comments is a good idea so you can have some faith in the process, see what happened, and believe what happened. I’m not sure that there’s an alternative to journals acknowledging the limitations of peer review processes – I think that they just have to be honest. At this point, every single time a retraction happens, everyone says it was an anomaly and finds a reason for why it was unique. We’re now cataloguing close to 2,000 retractions per year, suggesting that this is not true, and these cases are not unique.”

At this point, every single time a retraction happens, everyone says it was an anomaly and finds a reason for why it was unique. We’re now cataloguing close to 2,000 retractions per year, suggesting that this is not true.

Retractions can occur for any number of reasons, but retraction notices (if they appear at all) can be vague about the underlying cause. How should a retraction ‘ideally’ be conveyed? Is a nomenclature needed, particularly to help protect authors when the retraction is due to honest error?

“Over the years, I’ve actually grown to be increasingly opposed to a nomenclature for various ‘types’ of retraction. I think that in every case I’ve seen where nomenclature is involved, either journals make category errors or they use nomenclature as weasel words. Elsevier have used ‘withdrawn’ in certain cases (and other publishers have followed suit in some ways), and really this is an excuse or rationale not to include any information about why the paper was withdrawn or retracted. That’s a step way backwards. We all make category errors – I make category errors probably every day, but I hope I correct them. For whatever reason, the notion that what we really need is a better taxonomy has persisted – but how that is going to solve the problem of lawyers getting involved in the process and obfuscating reality, or journals not including reliable information in retraction notices, I don’t understand. It won’t help anyone if you still don’t know what actually happened.

What should actually happen – and this is borne out in the economics literature – is that retraction notices should state as clearly as possible what occurred, or state frankly if it’s unclear, as sometimes people have muddied the waters. If that’s the case, then say so: ‘we don’t know what’s happened here because lawyers on either side have been bickering for a year about this – but we feel we should tell readers anyway’. That’s a pretty honest way to go, unlike the approach of not saying anything.

Retraction notices should state as clearly as possible what occurred, or state frankly if it’s unclear.

For individual researchers, it’s very clear that if you retract a paper for fraud, dishonesty or misconduct, you have a retraction penalty, and your citations decline. Maybe your whole subfield’s citations decline as you bring everyone down with you. When you retract a paper due to honest error and the retraction notice very clearly explains this, you don’t see that decline. One study says you might even see a bump, although that hasn’t been replicated.

So, clarity in retraction notices is what’s needed. I think the notion that we can classify everything with a set of words – that will be argued about forever anyway – is the wrong way to go.”

Even after retraction, papers continue to be cited. Do journals need to do more to publicise retractions, and how can authors make sure they don’t fall into this trap?

“Again, it depends what journals want. Do they want to be upfront and help scientists be more efficient, make new discoveries and build knowledge, or are they more interested in protecting their reputations and hiding the fact that something has been retracted? I go by the old adage ‘never ascribe to malice that which is adequately explained by incompetence’, so I’m willing to acknowledge that the lack of action from journals may be due to incompetence rather than being intentional.

Do they [journals] want to be upfront and help scientists be more efficient, make new discoveries and build knowledge, or are they more interested in protecting their reputations and hiding the fact that something has been retracted?

There are now countless studies, conducted by librarians and bibliometrics and scientometrics scholars, showing that it can be very difficult to find that an article has been retracted. Journals and publishers are not transmitting the metadata to where they should (whether this is PubMed, Web of Science, etc) and sometimes they transmit the wrong metadata (eg they call something a correction when it’s a retraction). Even on the journal’s own pages or on the PDFs, articles often don’t show up as retracted. Journals should do more, as they’re the ones who end up publishing papers citing retracted work.

Journals should do more, as they’re the ones who end up publishing papers citing retracted work.

So, how can authors make sure they don’t fall into this trap? We created a database that is primarily for tracking retractions and we’re more comprehensive than any database of or containing retractions. At the moment, there are close to 25,000 retractions in our database – that’s almost twice as many as you’ll find in any other similar database. Authors can search for articles one-by-one using our database, if they want, or they can sign up for software suites and bibliographic management software packages that are working with Retraction Watch’s database. If you use Zotero for example, you’ll get an automatic flag every time a paper in your library is retracted. We get notes about this on Twitter all the time from people who didn’t know it existed and find it really helpful – we’re thrilled with that. We’d love the Retraction Watch database to be incorporated into more software packages too. Without automated flagging, which publishers just aren’t doing at this point, I just don’t see how authors can avoid citing retracted work – but these automated processes have become pretty easy to do.”

Without automated flagging, which publishers just aren’t doing at this point, I just don’t see how authors can avoid citing retracted work.

The extent and sophistication of journal targeting by paper mills and scams is ever-increasing. From your perspective, what can be done to tackle this problem and future-proof publishing processes against these attacks?  

“To me, this really takes a two-pronged approach. One prong is to tackle what we know is out there that no-one has seen fit to tackle yet. iThenticate and other software that looks for plagiarism and duplication follow this model: journals and publishers realised there was a lot of plagiarism, someone developed some software, and now everyone uses it. The same could be done with our database of retractions. Right now, we don’t have a good set of software tools that can detect image manipulation or image duplication, for example. We have individuals including Elisabeth Bik who are doing amazing work, but that’s not really scalable and we need a scalable solution. However, these solutions are only looking to fight yesterday’s battles. Meanwhile, the people who came up with these bad practices are coming up with more ‘clever’ approaches and we won’t know what those are until they explode. So, all of this fits into one prong – rooting out problems once we know they exist.

We also need to take a step back and move upstream to what the real issue is, which is the incentive structure. If we really want to de-incentivise bad (arguably, sometimes criminal) behaviours of misconduct and fraud, we need to decouple every career-affecting decision in academia from publishing papers in top journals. If you remove that incentive, then nobody’s going to feel a particular need to fake papers, go to a paper mill, or anything else.

If we really want to de-incentivise bad (arguably, sometimes criminal) behaviours of misconduct and fraud, we need to decouple every career-affecting decision in academia from publishing papers in top journals.

It’s probably no accident that paper mills tend to be concentrated in places, particularly China, where the incentive structure has been completely warped towards papers for so many years. If we don’t look at these incentive structures, every year or so, another scam will come out.

If we don’t look at these incentive structures, every year or so, another scam will come out.

We wrote about fake peer review back in 2012 – it turns out this hasn’t been eradicated, although it is now easier to detect and has been cut down. We broke a story about selling authorship in Russia, we’ve reported on paper mills – there’s just always something, and there’s always going to be something else. I don’t have the kind of mind to think up what will be next, although I can often find it once it happens thanks to sources like the scientific sleuths. None, or very little, of this will happen if we remove the very pervasive and poisonous incentive structures we have at the moment.”

As noted in the 10 takeaways from 10 years at Retraction Watch, pharma-funded publications account for a low proportion of retractions. You’ve noted that this is unsurprising given the increased scrutiny in pharma versus academia – what changes should academia make to reduce retraction rates? 

“Maybe this is controversial, but I don’t know that we should (certainly in the short or medium term) push to reduce retraction rates. If we mean reduce retraction rates as a proxy for reducing ‘bad behaviour’ – sloppiness or even misconduct – then yes, we should take measures to try to prevent that or to detect it better. There are still a lot of papers that should be retracted but haven’t been, so I don’t think we’ve reached the peak of retractions yet. Just like any other metric, if you suddenly decide that we need to cut down on retractions, that will make things worse. I do think that there are lots of steps that academia can take to try to cut down on these bad behaviours – this goes back to incentives, in a large part.

On the flipside, I don’t think that we should absolve pharma-funded publications of bad behaviour or misconduct. For those sorts of papers, studies can be set up in such a way as to get the desired results, but this is not something that would be considered misconduct or would be a ‘retractable offence’. There are gatekeepers and hoops that studies need to jump through (like Institutional Review Boards), but we shouldn’t assume that those systems are perfect.

Both settings have a lot of work to do – in academia you see behaviours that are ‘retractable offences’ while in pharma, that’s not the case, but research practices can have other negative effects. If universities are interested in lowering the rates of misconduct in their ranks, they need to look inwardly and examine whether they’ve created incentive structures that reward good or bad behaviour.”

Finally, in your opinion, what is the biggest challenge to research integrity right now, and how can this be overcome?

“I’m going to sound like a broken record, but I do think that incentives are my main concern and the thing that needs the most attention. That being said, one of the things that worries me is the significant tribalism in science, which has been amplified and made more visible by COVID-19.

You want constructive criticisms and critiques in science – you don’t want them to be ad hominem attacks. The critiques should help move the science and the evidence to a better place. Often, the most critical peer reviews are not necessarily of the papers that are most problematic (or frankly those that shouldn’t have been considered for publication in the first place), but are of papers that disagree with your point of view. I guess there’s a tribalism that cuts in every which way, whether it’s scientific, political, or due to the family tree of where and who you trained with. You end up with a lot of people shouting at each other and ‘creating heat without shedding a lot of light’. In the same way, social media has amplified and exacerbated a lot of issues in terms of politics, world events, conspiracy theories and what have you. Sometimes the loudest voices in science don’t have the evidence on their side, but their rhetorical approach is better.

I’m all for free speech – I think everyone should feel free to speak their mind and I encourage that, even when they disagree with me – but if we don’t figure out how to get away from this tribalism, we’re just going to polarise science even more. If we couple that with all the issues science is facing, whether it’s a real lack of funding, or publish-or-perish incentives, it’s not going to go well.”

Ivan Oransky is Editor in Chief of Spectrum, Distinguished Writer In Residence at New York University’s Carter Journalism Institute, and President of the Association of Health Care Journalists. He is also co-founder of Retraction Watch, which can be followed on Twitter @RetractionWatch. You can contact Ivan at team@retractionwatch.com and follow him on Twitter @ivanoransky.

——————————————————–

With thanks to our sponsor, Aspire Scientific Ltd


Overcoming barriers to reporting negative data: perspectives from the Center for Biomedical Research Transparency’s founder
https://thepublicationplan.com/2021/02/24/overcoming-barriers-to-reporting-negative-data-perspectives-from-the-center-for-biomedical-research-transparencys-founder/
Wed, 24 Feb 2021 10:36:15 +0000

Both positive and negative results must be made available to allow scientific advancements to be fully understood. However, it is estimated that for clinical trials alone, positive findings are nearly twice as likely to be published as negative or inconclusive outcomes, creating a publication bias. Sandra Petty is an academic and clinical neurologist focusing on epilepsy and its comorbidities, and is founder and CEO of the Center for Biomedical Research Transparency (CBMRT). Following her talk at the 2020 European Medical Writers Association Symposium, we found out more about the CBMRT’s efforts to reduce publication bias through increasing the reporting of negative results.

Firstly, for anyone who is not already familiar with the CBMRT, please would you describe the organisation and its aims?

“The CBMRT is a 501(c)(3) not-for-profit organisation that facilitates transparent reporting of biomedical and clinical research. We aim to ensure that all biomedical and clinical research results – including negative and inconclusive results – are accessible in the interests of patient safety and research efficiency.”

What inspired you to found the CBMRT?

“Publication bias has been a major issue affecting biomedical and clinical research for a long time. Working both clinically and in research, I could see major issues for research efficiency and patient safety that are related to non-publication of ‘negative’ results. The problem is that many results of well-performed research never see the light of day – particularly when an expected effect is not observed.

Negative, inconclusive and replicative results compete against new studies with positive results for limited publication space in the high impact journals. There’s sometimes a perception that publishing negative results may harm career prospects, and/or a view that investing the time to write up the result would yield a low impact factor publication, so time may be better spent focusing on higher impact publication areas. All of this works to the detriment of gaining a complete and balanced understanding of the area of research.

The enormous advances made in our understanding of diseases and therapies to date are a direct result of the quality of research conducted by dedicated scientists. Their work inspires, informs and refines new avenues of enquiry. The ultimate beneficiaries though are patients – clinicians are better placed to optimise treatments and to manage risks. So, the stakes are high if we fail to achieve balanced and transparent reporting of well-performed research (regardless of the results): it leaves us with an incomplete understanding of the state of our field and of our treatments, and it affects the knowledge we share with research participants and patients. And we’re wasting taxpayer and donor money by not sharing the results of well-executed research that they’ve funded.

So, to address these issues, I co-founded the CBMRT after completing my postdoc. The other motivation in founding the CBMRT was to bring stakeholders from across the biomedical and clinical research transparency ecosystem closer together through annual convenings to facilitate updates on important issues and developments in the field.”

You have previously discussed the causes and consequences of ‘dark data’ in scientific research. Can you outline why it’s so important to publish negative data, and what’s holding researchers back?

“Most scientists will tell you that there is ‘dark data’ sitting in lab books around the world. While we can’t necessarily publish every small experiment, when research is well-performed, a more complete record of what has been done, and what works and doesn’t work, is a really useful tool. The reasons behind dark data are more complex, ranging from opportunity costs of writing up papers to the perception that even if written up, negative/null results might be less likely to be published, or may reflect negatively on a researcher. At its worst, dark data can also include outright fraud or deliberate hiding of results, which is the more extreme end of the spectrum. Institutions often reward high impact factor articles in terms of key performance indicators and tenure conditions, which may lead scientists under time pressure to prioritise their positive findings for publication, sometimes at the expense of other results.

The problem with dark data is that scientists and clinicians cannot review it. They can’t see that a preclinical study showed no effect, or perhaps no effect and a side effect, leading to problems with research translation that might have been addressed or avoided if the publication record had been more complete.

A paper is still one of the gold standard publication methods, allowing authors to more fully explain their work than by simply sharing data or results; it creates context, and a reference point which should be available for systematic review and consideration before new projects (and funding) are commenced.

Addressing this issue and balancing the publication record is ultimately useful for scientists, clinicians and maintaining public trust in the scientific process.”

The CBMRT’s Null Hypothesis Initiative promotes the publication of negative, inconclusive or replicative results, so far partnering with Neurology, the American Heart Association (AHA) journals and Neurotrauma Reports. Do you think it’s important that negative data are published in mainstream journals, rather than venues dedicated to null outcomes?

“When I co-founded the CBMRT, the Null Hypothesis Initiative was my central concept, creating dedicated space for well-performed negative, inconclusive or replicative studies to be published in pre-existing journals – rather than in new standalone journals. Working with established journals allows us to keep arms-length from the review process and remain independent, while also working to promote an awareness of, and a solution to, publication bias. At the same time, we promote the publication of these articles with editors, editorial boards, publishers and authors, aiming to enhance publication culture and reduce bias.

I do believe it is important to publish well-performed studies in mainstream journals, regardless of the results. This should really be a standard part of scientist and journal workflow. These journals are regularly circulated to readers who are usually clinicians and scientists in the field and are also the journals where they aspire to publish articles. This then adds to awareness of the issue in question and drives further paper submissions in the target areas.”

How has the initiative been received so far, and what are your plans for its future?

“Anecdotally, we have received a lot of positive feedback from the research and transparency community, as well as from authors who have found the Null Hypothesis Initiative really helpful in publishing their work. Neurology and the American Academy of Neurology were our original collaborators for this initiative, which has been very successful. We are proud to be working with the AHA’s journals and with Neurotrauma Reports (and Cohen Veterans Bioscience) to establish Null Hypothesis papers in these fields.

We aim to have the Null Hypothesis Initiative evolve as a grass-roots scientist-driven movement across as many fields as needed, and we’re looking at fields and funding sources to help achieve this. Ideally, I’d like to see all of the major funders and publishing houses come onboard to sponsor and support this initiative.”

Null Hypothesis articles are made freely available. How important is the open science movement for improving transparency in research?

“We fundraise and collaborate with publishers and funders to support free-to-read and open access publications, so that the information is immediately available to clinicians and researchers. I believe that the open science movement has great potential to enhance transparency in biomedical and clinical research through available frameworks for recording and sharing protocols, data and results. However, researchers need to be supported to use these tools through grant structures and academic benchmarks to reflect their contributions.”

Alongside the Null Hypothesis Initiative, the CBMRT hosts the Biomedical Transparency Summit (BMTS) series. This year, the series will be held as a free virtual event for the first time. What can attendees look forward to?

“The BMTS is one of the real highlights of our year! The previous format was a full day Summit with fabulous speakers, important updates, networking and discussion among stakeholders in transparency, held in the US and EU. This year, we have shaken things up and are holding our Summit series online as three separate one-hour webinar sessions over February and March.

Again, we’ll be focusing on the latest developments around the world in the biomedical research transparency space. Just like our in-person Summits, participants will be updated by engaging experts who are leading transparency efforts in the policy, industry, technology, academia, publishing and funding domains. We’re delighted yet again by the quality (and enthusiasm) of our amazing speakers. Our goal is for the webinars to be as interactive and stimulating as possible so presentations will be deliberately brief (but content rich) to allow ample time for productive discussions amongst participants and speakers.

Here are the details:

  • Webinar 1: Research integrity – developments across the Atlantic [note: read our summary of the webinar here]. Keynote speakers: Prof Ana Marušić (Standard Operating Procedures for Research Integrity [SoPs4RI] Project Leader) and Michael Lauer MD (Deputy Director for Extramural Research at the National Institutes of Health [NIH])
  • Webinar 2: Open access – developments across the globe [note: read our summary of the webinar here]. Keynote speakers: Prof Johan Rooryck (Executive Director, cOAlition S) and Dr Ginny Barbour (Executive Director, Australasian Open Access Strategy Group)
  • Webinar 3: Acceleration of research and implications for research transparency [note: read our summary of the webinar here]. Keynote speakers include: Prof Ida Sim (Director, University of California San Francisco [UCSF] Informatics and Research Innovation), Dr John Inglis (Executive Director, Cold Spring Harbor Laboratory Press), Deborah Dixon (Global Editorial Director, Oxford University Press)

Registration is free. As we are a non-profit organisation, donations are always very much appreciated. Full details, including speaker bios, can be found in the Summit series outline and on the flier, and you can register here.”

Finally, looking ahead, what more can be done to support publication of high-quality research, regardless of the outcome, and what are your top tips for researchers with negative or null results?

“Let’s stop and think why ‘negative’ results of well-performed studies attract negative connotations. We need to reframe the thinking and perception here – they are valid results of well-performed research.

If one strategy has not solved the problem, it is important to publish that finding so that research waste is reduced, and other methods or approaches can be considered. The clinical research problem itself isn’t solved by a null finding, but it may well lead towards something that does work.

To support publication of high-quality research regardless of the outcome, we need to address cultural and structural barriers, make space in journals and celebrate a culture of transparency as something that enables science to progress more efficiently.

My top tips for negative or null findings:

  • Write them up and submit them.
  • Point out their quality and value in your cover letter. We all need to create positive press for ‘negative’ data. It’s your hard work, and publishing it acknowledges the contributions of study participants, collaborators and funders.
  • If you don’t think that a full-length paper is appropriate, look at submitting a preprint, sharing the data or using open science tools to share your work.
  • If you think your field is in need of a Null Hypothesis partnership, please get in touch!”

Associate Professor Sandra Petty is an academic and clinical neurologist at the University of Melbourne Medical School, St Vincent’s Hospital and Alfred Health in Melbourne. She is founder and CEO of the CBMRT. You can get in touch with her at sandy@cbmrt.org and follow the CBMRT on Twitter @CBMRT_org.



Reporting of many clinical trials ruled illegal
https://thepublicationplan.com/2020/05/12/reporting-of-many-clinical-trials-ruled-illegal/
Tue, 12 May 2020 13:10:15 +0000

A New York judge has ruled that hundreds of the clinical trials registered on ClinicalTrials.gov are in breach of the law due to unreported results. As noted on the AllTrials website, the case was brought against the US Department of Health and Human Services and concerns trials of unapproved drugs that were registered on ClinicalTrials.gov between 2007 and 2017, which failed to report results. Researchers and institutions have often interpreted the requirement to disclose clinical trial outcomes, as laid down in the 2007 FDA Amendments Act, as only applying to approved drugs.

The ruling establishes that reporting requirements apply regardless of approval status, including for studies registered prior to 2017 when the Amendments Act was clarified and expanded to cover a broader range of studies.

Organisations must now report missing results or risk being in breach of the law.

This may present a particular challenge for academic institutions and researchers, as compliance with reporting requirements by these organisations has been found to be significantly lower compared with industry. A 2019 study reported mean disclosure rates of 74% for trials sponsored by pharmaceutical companies compared with 46% for those with non-industry sponsors.

While the ruling may have clarified that applying reporting requirements to approved drugs only is a misinterpretation of the law, much uncertainty remains. As yet, it is not clear whether the ruling will be appealed and the FDA has not indicated what action it may take. Even prior to the ruling, many organisations were found to be falling short in reporting clinical trial results and the effectiveness and transparency of enforcement has been questioned. What is not in doubt is the time and resources required to report missing data from a decade’s worth of clinical trials. We wait with interest to see just how many of these undisclosed results will be made available and whether this ruling results in fewer future breaches of the law.

——————————————————–

Summary by Ian Faulkner PhD from Aspire Scientific

——————————————————–

With thanks to our sponsors, Aspire Scientific Ltd and NetworkPharma Ltd


Is preregistering scientific research becoming mainstream?
https://thepublicationplan.com/2020/03/17/is-preregistering-scientific-research-becoming-mainstream/
Tue, 17 Mar 2020 12:51:31 +0000

In an effort to enhance transparency and reproducibility, two-stage formats allowing preregistration of study protocols, followed by publication of the results, have increasingly become available. As clinical trial preregistration has become the norm, over 200 journals now offer Registered Reports, and it seems that more areas of scientific research are following suit.

The multidisciplinary journal PLOS ONE has recently introduced Registered Reports, a new article type that allows researchers to submit their proposed protocol before beginning their research, with the promise of publication once the results are available. Under the PLOS ONE system:

  • Authors submit a Registered Report Protocol describing the rationale, methodology and any ethical approvals required for the proposed study.
  • The protocol is peer-reviewed, to ensure scientific rigor and that the planned research meets PLOS ONE’s criteria.
  • Authors conduct their work in the knowledge that their findings will be submitted and peer reviewed for publication as a linked Registered Report Research Article.

PLOS ONE believe that the Registered Report format will ensure credibility in research communication and assessment.

Potential benefits of Registered Reports include:

  • Combatting publication bias: peer review at the end of research puts emphasis on the results, whereas the Registered Reports format allows outcome-neutral assessment solely based on research quality.
  • Improving study robustness: by enabling researchers to get feedback before conducting their experiments, the best possible study design can be developed collaboratively, and adherence to the initial protocol can later be assessed.
  • Streamlining publication: the target journal for the study results is assigned up-front, so authors don’t need to spend time approaching different journals while the data are ageing.

In addition, researchers will gain an extra peer-reviewed publication. Reviewers may also benefit if authors take up the open peer review option, publishing each article’s peer review history. With PLOS ONE reporting that researchers are eager for such formats to be made available, we look forward to hearing about the uptake of their Registered Reports initiative in this era of open science.

——————————————————–

Summary by Robyn Foster PhD from Aspire Scientific



Negative results can have a positive impact, but only if they are available!
https://thepublicationplan.com/2020/02/13/negative-results-can-have-a-positive-impact-but-only-if-they-are-available/
Thu, 13 Feb 2020 11:26:28 +0000

Both positive and negative results are fundamental to overall understanding and scientific advancement. Yet despite their vital role, negative or unexpected results may be overlooked, resulting in the so-called publication bias for positive data.

The limited availability of negative results has helped to fuel the ‘reproducibility crisis’, with damaging practical and economic consequences, as explained by Simon Nimpf and Dr David Keays in a recent article in EMBO Reports. The authors state that up to 89% of published studies cannot be replicated and highlight the difficulty in getting a refutation accepted. However, while researchers and publishers may be reluctant to publish negative findings, it is their responsibility to recognise the significance of such data and to make them available.

Nimpf and Keays offer advice on how to navigate the publication of negative results, suggesting approaching the journal that published the original findings as a first step for refutations. While the authors note that some journals dedicated to negative results have had limited uptake by researchers, Cambridge University Press launched the open access journal Experimental Results in 2019. The journal aims to publish articles describing the validation and reproducibility of existing findings, null results, supplementary findings, and improvements or amendments to published results: such approaches may help to redress the balance in the publication of negative and positive data.

We must recognise the significant impact that publication bias can have on patients. This was highlighted recently when The European Association for Cardio-thoracic Surgery (EACTS) questioned the safety of certain European clinical guidelines for coronary artery disease. Unpublished data from one of the underlying studies suggested that the original conclusions may have been misleading, prompting EACTS to encourage its members to ignore the recommendations, at least for now.

Ultimately, Nimpf and Keays urge the scientific community as a whole to recognise the vital role of negative data: we must honour our responsibility to make these findings public.

——————————————————–

Summary by Jo Chapman PhD from Aspire Scientific



Disparities claimed between stance and action on clinical trial outcome reporting by top medical journals
https://thepublicationplan.com/2019/03/28/disparities-claimed-between-stance-and-action-on-clinical-trial-outcome-reporting-by-top-medical-journals/
Thu, 28 Mar 2019 09:23:39 +0000

Guidance from the EQUATOR Network and the ICMJE stipulates that clinical trials must be publicly registered at inception in order for results to be considered for publication. This practice was intended to reduce selective reporting of trial outcomes; however, discrepancies between pre-specified and published outcomes of clinical trials persist. A recent article published in Trials by Dr Ben Goldacre (AllTrials co-founder) and colleagues reported high levels of outcome misreporting in journals listed as endorsing the CONSORT statement on the correct reporting of clinical trials. What’s more, the group reported that most correction letters they sent to address such misreporting were rejected.

This prospective study (COMPare) tracked outcome switching in 67 randomised controlled trials (RCTs) published over a six-week period (19 Oct–30 Nov 2015) in the five top CONSORT-endorsing medical journals (The New England Journal of Medicine, The Lancet, JAMA, The BMJ and Annals of Internal Medicine). Overall, outcome reporting was deemed to be poor; 58/67 (87%) manuscripts analysed contained CONSORT-breaching discrepancies judged to require a correction letter. Of these letters, only 40% were published, with a median delay of 99 days. Qualitative analysis suggested misunderstanding among journal editors surrounding correct outcome reporting and CONSORT, a finding that may explain why some journals did not cooperate when presented with evidence of misreporting.

The authors warn that readers of a journal with a CONSORT-supporting stance may assume that trial results are reported in line with pre-specified outcomes. To ensure this is the case, Goldacre and his colleagues suggest a number of strategies:

  • changes to journals’ correspondence processes
  • indexed post-publication peer review
  • changes to CONSORT’s enforcement mechanisms
  • changes to practices in methodology research, to increase sharing of misreporting with the broader academic community.

——————————————————–

Summary by Emma Prest PhD from Aspire Scientific


