Reproducibility – The Publication Plan
For everyone interested in medical writing, the development of medical publications, and publication planning
https://thepublicationplan.com
A central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing.

Uncovering scientific ERRORs: can financial rewards work?
https://thepublicationplan.com/2024/10/31/uncovering-scientific-errors-can-financial-rewards-work/
Thu, 31 Oct 2024

KEY TAKEAWAYS

  • The ERROR project pays reviewers to search for mistakes in the scientific literature, while rewarding authors who agree to participate.
  • Reviewers and authors receive bonuses depending on the extent of errors found.

Amid rising retraction rates, the scientific record is increasingly scrutinised for signs of research misconduct like fabrication and image manipulation. But what about detecting errors in the data underlying scientific publications?

The ERROR project

Modelled on tech company ‘bug bounty’ programmes, the Estimating the Reliability & Robustness of Research (ERROR) project offers cash rewards for reviewers identifying incorrect or misinterpreted data, code, statistical analyses, or citations in scientific papers. Following ERROR’s launch earlier this year, Julian Nowogrodzki reviewed the project so far in a recent article in Nature.

Professor Malte Elson and colleagues are aiming to produce a blueprint for systematic error detection that will be scalable and transferable across scientific fields. Starting with highly cited psychology papers, the first review was posted in August. ERROR plans to cover 100 publications over 4 years, expanding into artificial intelligence, medical research, and potentially preprints.

“The ERROR project offers cash rewards for reviewers identifying incorrect or misinterpreted data, code, statistical analyses, or citations in scientific papers.”

Financial incentives

The project has 250,000 Swiss francs (~£220,000) of funding from Professor Elson's institution, the University of Bern. Reviewers can earn up to 1,000 Swiss francs per review, plus a variable bonus of up to 2,500 Swiss francs depending on the scale of the errors identified. Authors receive up to 500 Swiss francs: 250 for agreeing to participate and sharing their data, plus a bonus if minimal errors are found.

A challenging path

Despite the incentives, ERROR has hurdles to overcome:

  • Author buy-in: So far, authors from just 17 of 134 selected papers have agreed to participate.
  • Data access: Underlying data may have been lost or authors may cite legal reasons barring sharing.
  • Reviewer expertise: Few potential reviewers have sufficient technical expertise yet no conflicts of interest. Dynamics linked to seniority may also prevent some prospective reviewers from taking part.

The ERROR team hopes to convince research funders to allocate money for error detection – ultimately saving them from investing in flawed research. We look forward to seeing how this project helps move the needle towards a more reproducible scientific record.

————————————————–

Do you think current ‘ad hoc’ approaches to error detection in the scientific record are sufficient?

Retractions as corrections: shifting the narrative
https://thepublicationplan.com/2024/09/25/retractions-as-corrections-shifting-the-narrative/
Wed, 25 Sep 2024

KEY TAKEAWAYS

  • Retractions should be seen as neutral corrections made to preserve the integrity of academic work, rather than punitive actions.
  • Consistent communication and transparency throughout the retraction process are key to maintaining trust within academic publishing.

Retractions in academic publishing have long been viewed as a mark of shame, often associated with misconduct. However, this perception can in itself be detrimental to the integrity of the scientific record. As Tim Kersjes argues in an LSE Impact Blog, in order for research to be self-correcting it might be time to shift the narrative and start to view retractions as ‘neutral tools’.

Remove the stigma

Kersjes outlines how the stigmatisation of retractions deters authors from retracting their work, even when errors are discovered. Viewing retractions as a routine part of the scientific process could encourage more authors and editors to retract flawed work, ensuring that the published record remains reliable. While past suggestions have included systems that categorise retractions based on the reasons behind them, Kersjes cautions against this, questioning whether such approaches really remove stigma or have the unintended consequence of increasing it.

Standardise reporting

Meanwhile, The Scholarly Kitchen reported on relevant new guidance by the National Information Standards Organisation (NISO). The Communication of Retractions, Removals, and Expressions of Concern (CREC) Recommended Practice emphasises consistency and transparency in the way that retractions are communicated, rather than focusing on the reason for retraction. In particular, it recommends:

  • consistent terminology, including naming protocols
  • retraction status to be clearly indicated in the title of the article
  • use of watermarks and labels on landing pages
  • clear responsibilities regarding handling of associated metadata.

The way forward

It is crucial for all stakeholders—authors, editors, and publishers—to embrace retractions as correction tools and for retractions to be communicated clearly and consistently. In doing so, we can foster a culture whereby the integrity of published research is prioritised above all else.

————————————————–

Do you believe that retractions should be treated as neutral corrections in academic publishing?

Building trust: ACCORD guidelines for reporting consensus methods
https://thepublicationplan.com/2024/07/09/building-trust-accord-guidelines-for-reporting-consensus-methods/
Tue, 09 Jul 2024

KEY TAKEAWAY

  • The ACCORD reporting guidelines comprise a 35-item checklist that aims to improve the transparency of reporting on consensus methods.

The COVID-19 pandemic highlighted the need for effective knowledge-sharing to guide healthcare decisions. In rapidly evolving situations, reaching consensus among experts from diverse backgrounds is crucial, especially when evidence is emergent or inconsistent. This process is best achieved using formal consensus methods.

Despite their critical role in healthcare and policy decision-making, consensus methods are often inadequately reported, leading to inconsistencies and lack of transparency. To address these issues, the ACcurate COnsensus Reporting Document (ACCORD) project was established to develop comprehensive guidelines for reporting the numerous consensus methods used in medical research.

The ACCORD reporting guidelines aim to enhance trust in the recommendations made by consensus panels, benefiting authors, journal editors, reviewers, and, ultimately, patients through more reliable healthcare recommendations.

The ACCORD checklist was formulated using the EQUATOR Network’s methodology for developing reporting guidelines, with the full study protocol published in Research Integrity and Peer Review. The project began with a systematic review, followed by 3 rounds of the Delphi process and several steering committee meetings. To ensure a comprehensive perspective, a diverse panel was engaged, comprising 72 participants from 6 continents and various professional backgrounds, including clinical, research, policy, and patient advocacy. Through this rigorous process, a preliminary checklist was refined to a final list of 35 essential items covering all sections of a manuscript.

The ACCORD reporting guidelines aim to enhance trust in recommendations made by consensus panels, benefiting authors, journal editors, reviewers, and ultimately patients through more reliable healthcare recommendations.

————————————————–

What do you think – will the ACCORD guidelines improve the transparency of reporting on consensus methods?

How does failure to falsify influence the reliability of scientific research?
https://thepublicationplan.com/2023/06/23/how-does-failure-to-falsify-influence-the-reliability-of-scientific-research/
Fri, 23 Jun 2023

KEY TAKEAWAY

  • Failure to test and refute prominent hypotheses reduces confidence in the reliability of scientific results and hinders scientific progress.

Across many scientific fields there is a well-documented reproducibility crisis that is damaging trust in the reliability of research data. In a recent article published in eLife, Dr Sarah Rajtmajer and co-authors discuss how failure to falsify (refute) strong hypotheses through direct testing has contributed to the problem.

As a case study, the authors highlight two prominent and seemingly contradictory hypotheses in the field of connectomics:

  • Hyperconnectivity hypothesis: brain injury results in an enhanced functional network response.
  • Disconnection hypothesis: brain injury results in reduced functional connectivity.

Instead of deliberate attempts to challenge either of these positions, the research area has seen the publication of a large number of small studies examining under-specified hypotheses, which has done little to bring clarity to the existing body of literature. The authors argue that the ‘science-by-volume’ culture, coupled with the overuse of inappropriate statistical tests and lack of falsification attempts, fosters a research environment in which the quantity of scientific findings continues to grow, but the depth of understanding remains stagnant.

The article calls out the big data revolution as a factor adding to these concerns. The ability to analyse large datasets in different ways can produce false or coincidental correlations, particularly if the statistical methodologies used are not robust.

The strongest hypotheses are specific, easily testable, and clearly indicate the evidence needed to disprove their predictions.

According to Rajtmajer et al., the strongest hypotheses are specific, easily testable, and clearly indicate the evidence needed to disprove their predictions. The authors suggest embracing a ‘team science’ approach, where groups of scientists work together to form opposing hypotheses, design experiments to test them, and agree on the outcomes that would support or refute them.

Implementing a falsification approach, whereby every observation confirms or refutes a hypothesis, would be challenging in everyday research practice. However, the authors believe that regular attempts to falsify a hypothesis could guide the direction of scientific research and enhance the reliability of published science, particularly if combined with other processes aimed at improving data transparency.

Regular attempts to falsify a hypothesis could guide the direction of scientific research and enhance the reliability of published science.

—————————————————–

Could placing a greater emphasis on hypothesis testing and falsification help solve the reproducibility crisis in scientific research?

Are your AI-generated data reproducible?
https://thepublicationplan.com/2023/02/16/are-your-ai-generated-data-reproducible/
Thu, 16 Feb 2023

KEY TAKEAWAYS

  • Researchers have raised concerns about the reproducibility of AI-based studies across many scientific fields including medicine.
  • Reporting checklists and guidelines could help to avoid common pitfalls in studies using AI but more needs to be done to ensure scientific credibility.

The results of many studies that use machine learning or artificial intelligence (AI) methodologies could be overstated. This warning of a reproducibility crisis in machine learning was recently reported by Elizabeth Gibney in Nature, based in part on the findings of a preprint co-authored by Sayash Kapoor and Arvind Narayanan, who identified 329 studies across multiple scientific disciplines with shortcomings in the reproducibility of their findings.

Machine learning and AI have become powerful tools at the disposal of biomedical researchers, but the reproducibility of these methodologies and their associated outcomes is paramount for their credibility. A methodological pitfall frequently encountered by Kapoor and Narayanan in their analysis was so-called 'data leakage', where data used to train the AI model were subsequently included in the test data set, potentially exaggerating the AI's ability to make accurate predictions. To counter this, Kapoor and Narayanan propose that researchers use 'model info sheets' to transparently report the details of their AI models.
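To make the 'data leakage' pitfall concrete, here is a minimal sketch (invented for illustration, not taken from the preprint) of one common form of it: computing a preprocessing step, such as feature scaling, on the full dataset before splitting into training and test sets, so that test-set statistics silently inform the training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(200, 3))  # toy feature matrix

# Leaky: normalise using statistics computed on ALL rows, then split.
# The test rows have already influenced the mean/std seen in training.
X_all_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
train_leaky, test_leaky = X_all_scaled[:150], X_all_scaled[150:]

# Correct: split first, then derive statistics from the training set only
# and apply them to both splits.
train, test = X[:150], X[150:]
mu, sigma = train.mean(axis=0), train.std(axis=0)  # training data only
train_ok = (train - mu) / sigma
test_ok = (test - mu) / sigma  # test set is transformed but never fitted on
```

The same ordering rule applies to any fitted step (imputation, feature selection, dimensionality reduction): fit on the training split alone, then apply to the held-out data.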

“Unless we do something like this, each field will continue to find these [reproducibility] problems over and over again.”

Reporting checklists are not unfamiliar to AI researchers in the biomedical field, as noted by Gibney, who referred to initiatives like the EQUATOR Network's CONSORT-AI and SPIRIT-AI reporting guidelines developed by Dr Xiao Liu and colleagues. While checklists are an important and useful tool, greater collaboration between researchers and specialists in machine learning could also help. It is encouraging, then, that 1,200 people registered to attend a workshop on reproducibility, co-organised by Kapoor, with the mission of resolving the reproducibility crisis in AI-based science.

—————————————————–

What do you think – do AI and machine learning models need greater scientific scrutiny?

Are research data FAIR enough?
https://thepublicationplan.com/2023/02/03/are-research-data-fair-enough/
Fri, 03 Feb 2023

KEY TAKEAWAYS

  • Current mandates for responsible data sharing aim to make data findable, accessible, interoperable, and reusable (FAIR), but are not always effective.
  • More action may be needed to develop metadata standards that ensure research data are truly FAIR.

Data-sharing mandates aim to make research outputs more accessible to allow verification of results and further analyses.

Horizon Europe, the European Union’s programme for research and innovation funding, mandates that almost all data must be FAIR:

  • Findable
  • Accessible
  • Interoperable
  • Reusable.

Further, in August 2022, the US government announced a policy requiring that federally funded research articles and most underlying data be made freely available, to be implemented by 2025.

Despite these efforts, Professor Mark A. Musen has shared concerns about the ad hoc nature of metadata and the difficulties in finding online data sets, arguing that few data sets are actually FAIR. He describes how current metadata often contain only administrative and organisational information without any useful experiment-specific descriptors, forcing researchers to search records manually, a time-consuming and often futile task.
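Prof. Musen's distinction between administrative metadata and experiment-specific descriptors can be illustrated with a hypothetical record; all field names here are invented for the example and are not drawn from any published metadata standard.

```python
# Hypothetical illustration of the metadata gap Musen describes.
# Field names are invented for this sketch, not from a real standard.

# A typical deposit: administrative details only. It says nothing about
# what was measured, so a researcher searching for relevant experiments
# must open and read each record manually.
administrative_only = {
    "title": "Study data",
    "author": "J. Smith",
    "institution": "Example University",
    "date_deposited": "2022-11-01",
}

# Experiment-specific descriptors of the kind a discipline-level
# standard would require, making the dataset findable by its science.
experiment_specific = {
    **administrative_only,
    "organism": "Homo sapiens",
    "assay_type": "RNA-seq",
    "tissue": "peripheral blood",
    "disease_state": "type 2 diabetes",
    "instrument": "Illumina NovaSeq 6000",
}
```

Only the second record would match a search such as "RNA-seq AND type 2 diabetes", which is the practical difference between data that are nominally shared and data that are truly findable.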

“The research community must commit to creating discipline-specific standards for metadata and to applying them throughout the scientific enterprise.” – Professor Mark A. Musen, Professor of Medicine (Biomedical Informatics) and Biomedical Data Science, Stanford University, California

Prof. Musen highlights potential solutions to optimise data sharing but acknowledges that these come with certain difficulties. For example:

  • Referencing data sets in published manuscripts: allows inclusion of experimental details, although data may not be deposited in a form that is easy to understand; additionally, not all manuscripts are accepted for publication.
  • Technology: the CEDAR Workbench is a tool that automatically generates metadata forms to help describe particular types of biomedical experiments in a standardised way; however, the tool is only useful in scientific fields with at least basic metadata standards – something Prof. Musen believes is lacking.
  • Dedicated workshops: ZonMw, a funding agency in the Netherlands, hosts workshops to develop FAIR metadata standards for its grant recipients to use; however, these workshops are costly (approximately €40,000 for the development of a single standard).

Prof. Musen concludes that FAIR data will need a huge investment and development of standards that go much further than simple mandates.

—————————————————–

Do you think current mandates are enough to ensure research data are FAIR?

GRReaT expectations: are editable templates the future of manuscript writing?
https://thepublicationplan.com/2022/07/28/grreat-expectations-are-editable-templates-the-future-of-manuscript-writing/
Thu, 28 Jul 2022

KEY TAKEAWAYS

  • Authors may soon be able to use the GoodReports.org website to generate a manuscript template incorporating the recommended reportable items from the most appropriate set of EQUATOR reporting guidelines.
  • The GRReaT trial will evaluate whether the templates improve research reporting compared to the use of reporting checklists alone.

The GoodReports.org website, which helps researchers choose the most appropriate reporting guidelines for their study, may soon be able to provide authors with an editable article template to help implement good reporting practices earlier in the manuscript writing process. The GoodReports Randomized Trial (GRReaT) is looking for authors to help test whether the article templates result in more completely reported articles than simply signposting authors to a checklist of reportable items.

It is estimated that only 20% of medical research contributes to the advancement of knowledge, with poor reporting a recognised factor in this alarming statistic.

A recent BMC Series blog by EQUATOR (Enhancing the Quality and Transparency of Health Research) Network team member Caroline Struthers described how a series of Lancet articles on the topic of research waste inspired the initial development of the EQUATOR-headed GoodReports.org website. Since its launch in 2018, user feedback showed that authors thought they would receive greater benefit from the tool if guidance was given earlier in the writing process and in a more implementable, non-checklist format, prompting the evaluation of editable manuscript templates.

The Medical Research Council-funded GRReaT trial is looking for medical researchers working on the following types of health-related studies to contribute:

  • cohort studies
  • case-control studies
  • cross-sectional studies
  • observational studies in nutrition or dietetics
  • systematic reviews (of healthcare interventions)
  • randomised trials of pharmaceuticals, medical devices, procedures, or social or psychological interventions.

The results of the GRReaT trial will provide an important insight into how comprehensive reporting in medical research can be better supported. In exchange for their contributions, trial participants will receive a manuscript ‘completeness’ report from experts at the UK EQUATOR centre with tips on how to improve their manuscript prior to peer review.

—————————————————–

What do you think – will the GoodReports article templates help improve reporting in medical research?

Should open science focus more on open methods?
https://thepublicationplan.com/2021/07/20/should-open-science-focus-more-on-open-methods/
Tue, 20 Jul 2021

Open science is now recognised as a key driver in improving the quality of scientific research, with widespread support amongst researchers and global organisations such as the United Nations Educational, Scientific and Cultural Organization (UNESCO). Many open science initiatives focus on the availability of research datasets – ie, the concept of open data – but a recent article calls for an open and transparent approach to reporting research methods too.

In his article, Dr David Crotty explains that with the traditional format of journal publications, many valuable methodological details of a research project are cut due to space limits. Although it would be ideal to capture the whole research workflow, researchers are time limited, so creating a detailed public record of all their daily activities is unrealistic. However, as the next step to open data, Dr Crotty says that access to research methodologies would allow validation of the quality and accuracy of published data. He argues that open methods have at least as much potential for re-use as open data due to their broader applicability, with methods papers being amongst the most cited article types. Indeed, many recent Nobel Prizes have been given to researchers who created scientific approaches that have subsequently been widely applied by other scientists.

Open methods have at least as much potential for re-use as open data due to their broader applicability, with methods papers being amongst the most cited article types.

Open methods would require the public availability of detailed documentation of the procedures used to gather and analyse data, with options including:

  • publication of a standalone methods paper
  • better use of supplementary materials to document methods in detail
  • deposited documentation in a repository such as protocols.io.

Following the model of the open data movement, input from a wide variety of stakeholders will be needed to implement open methods, with similar standards (such as the FAIR Guiding Principles) and suitable repositories. Their value will need to be reflected by funders and institutions to encourage the time investment by researchers. Publishers will also play a key role in normalising the open methods paradigm for authors, with some publishers already having created specific journals and repositories to facilitate better methods reporting.

In the words of Dr Crotty:

“Now is the time to move this forward. Put simply, transparency around research methodologies is essential for driving public trust and accurate, reproducible research results.”

—————————————————–

What do you think – are open methods as important as open data?

—————————————————–


Research integrity in the COVID-19 era: insights from Retraction Watch co-founder Ivan Oransky
https://thepublicationplan.com/2021/03/17/research-integrity-in-the-covid-19-era-insights-from-retraction-watch-co-founder-ivan-oransky/
Wed, 17 Mar 2021

Ivan Oransky has been at the forefront of efforts to highlight research integrity issues for over a decade, co-founding Retraction Watch in 2010 to track and publicise retractions in the scientific literature. Following his presentation at the 2020 European Medical Writers Association (EMWA) symposium, we spoke to him about retractions during the COVID-19 pandemic and steps he believes should be taken to tackle research integrity challenges in the future.

First of all, COVID-19 is having a huge, ongoing impact on our daily lives and on scientific research – reflected in the huge number of COVID-19-related publications. At the same time, Retraction Watch’s list of retracted COVID-19 papers continues to grow. Which of the COVID-19-related retractions to date do you think have been the most notable, and what do these cases tell us about current practice in scientific publishing?

“I don’t know that I would choose any particular COVID-19-related retraction as most notable – I suppose that’s like asking which of your children is your favourite. There are certainly the ones that gained the most attention – if I had to pick one, it would be the Lancet paper about hydroxychloroquine that was based on a very questionable (at best) dataset from a company called Surgisphere. I think that paper captured the most attention, and close behind it was a New England Journal of Medicine (NEJM) paper that was also based on those alleged data, but wasn’t about hydroxychloroquine so didn’t capture quite so many eyeballs. Those are the retractions where I think a lot of people had a Casablanca “shocked, shocked!” moment, with the idea that, somehow, this was completely different from anything that’s ever happened in science before. And that’s just nonsense – complete revisionist history.

I think it’s more important, or useful in a way, to look at the whole pattern. I wouldn’t call these data so much, but there have been 87 retractions of COVID-19-related papers to date. That number isn’t all that different from what you would expect to see given the number of papers – and preprints – that have been published.

There have been 87 retractions of COVID-19-related papers to date. That number isn’t all that different from what you would expect to see given the number of papers – and preprints – that have been published.

However, 10 of these retractions were because Elsevier published manuscripts twice that authors had only submitted once. What that speaks to is the rush, or the fast pace, of publishing in the COVID-19 era. The fast pace isn’t so bad, but the system of peer review and publication hasn’t really adapted well enough to it over the years – although I would argue that there have been some strides in that direction.

The fast pace of publishing in the COVID-19 era…isn’t so bad, but the system of peer review and publication hasn’t really adapted well enough to it over the years.

To me, it’s not a particular retraction that’s important – rather the phenomenon that everyone’s rushing and there’s a lot of sloppiness. If anything, I’d say that the proportion of retractions due to misconduct is much lower than you might see in a typical dataset of retractions. I don’t know what to make of that yet, and it could be that people just haven’t found the cases of misconduct so far, but I think that that’s worth paying attention to. It really speaks more to sloppiness and rushing rather than out-and-out fraud accounting for COVID-19-related retractions.”

The proportion of retractions due to misconduct is much lower than you might see in a typical dataset…it really speaks more to sloppiness and rushing rather than out-and-out fraud accounting for COVID-19-related retractions.

While journals have acted quickly to retract some COVID-19-related publications, in general, the pace of investigation and retraction is very slow. However, you’ve recently highlighted a “double-standard” involving rapid retraction when papers draw negative attention on social media. How should journals prioritise their investigations to address allegations in a timely way?

“Well, I think that what journals and publishers should do is actually prioritise investigations. Although some argue that the problem is certain papers being retracted before other papers, the problem is that not enough papers are being retracted, full-stop. There are countless papers being flagged – whether that’s on PubPeer, through correspondence with journals or by scientific sleuths like Elisabeth Bik – where journals are doing nothing. Maybe they’re investigating the cases and it’s just taking them a long time – but why is it taking them so long?

One positive development over the past few years is that some journals are actually hiring entire staffs to look at allegations and to try to catch issues that might lead to retraction before articles are published. Those are the journals and publishers that I think everyone should emulate, such as the Journal of Biological Chemistry, PLOS ONE and FEBS PRESS.

Some journals are actually hiring entire staffs to look at allegations and to try to catch issues…before articles are published. Those are the journals and publishers that I think everyone should emulate.

So, to me, the issue is not so much whether we should retract some papers before others. The more important question is ‘why are journals not prioritising investigations, full-stop?’ If there has to be some prioritisation, then we should retract papers with fatal flaws that seem to be doing harm, or have the potential for doing harm, first. The problem is that then nobody will do anything about all of the other papers. I really hesitate to talk about prioritising certain ‘retractable offences’ over others as I know what will happen – I’ve been watching journals ignore problems for a decade. If you give journals and publishers an excuse, or a rationalisation for why they’re not getting to something they should be getting to, you’re creating more of an issue, and journals know that.”

I really hesitate to talk about prioritising certain ‘retractable offences’ over others as I know what will happen – I’ve been watching journals ignore problems for a decade.

Recently, Retraction Watch discussed a Scientific Reports article retracted following a post-publication peer review round requested by the Editor. Are changes to peer review processes needed to avoid this kind of retraction? Do you think increasing adoption of post-publication and open peer review processes will impact retraction rates?

“I think whether changes are needed to peer review processes depends on what your goal is. Is your goal to prevent retractions, or is it to actually have a transparent publication process that reflects how science works instead of having papers be the be all and end all in terms of promotions, tenure, and so on? I think you have to decide what your goals are, and once you’ve decided this, you can create a system that makes sense.

Part of what always puzzles me is why journals can’t just be honest all the time about how much gets through peer review that shouldn’t.

Part of what always puzzles me is why journals can’t just be honest all the time about how much gets through peer review that shouldn’t. In my opinion, journals have never done a good job of answering this. I hope that one of the illuminating things about the Lancet and NEJM COVID-19-related retractions is that the editors were really forced to admit that their peer review systems were not well-equipped for those papers, although the journals approached this in different ways. These lessons are a good thing, but it’s not as if these issues with peer review only happen when there’s a retraction that catches everyone’s attention.

I hope that one of the illuminating things about the Lancet and NEJM COVID-19-related retractions is that the Editors were really forced to admit that their peer review systems were not well-equipped for those papers.

The paper in Scientific Reports caught everyone’s attention because of what it’s about and the conclusions [the paper made links between obesity and dishonesty], but papers are slipping through like this all the time. Journals need to acknowledge this and provide their peer review reports. I do think that, even if it’s anonymised, publishing peer review comments is a good idea so you can have some faith in the process, see what happened, and believe what happened. I’m not sure that there’s an alternative to journals acknowledging the limitations of peer review processes – I think that they just have to be honest. At this point, every single time a retraction happens, everyone says it was an anomaly and finds a reason for why it was unique. We’re now cataloguing close to 2,000 retractions per year, suggesting that this is not true, and these cases are not unique.”

At this point, every single time a retraction happens, everyone says it was an anomaly and finds a reason for why it was unique. We’re now cataloguing close to 2,000 retractions per year, suggesting that this is not true.

Retractions can occur for any number of reasons, but retraction notices (if they appear at all) can be vague about the underlying cause. How should a retraction ‘ideally’ be conveyed? Is a nomenclature needed, particularly to help protect authors when the retraction is due to honest error?

“Over the years, I’ve actually grown increasingly opposed to a nomenclature for various ‘types’ of retraction. I think that in every case I’ve seen where nomenclature is involved, either journals make category errors or they use nomenclature as weasel words. Elsevier have used ‘withdrawn’ in certain cases (and other publishers have followed suit in some ways), and really this is an excuse or rationale not to include any information about why the paper was withdrawn or retracted. That’s a big step backwards. We all make category errors – I make category errors probably every day, but I hope I correct them. For whatever reason, the notion that what we really need is a better taxonomy has persisted – but how that is going to solve the problem of lawyers getting involved in the process and obfuscating reality, or journals not including reliable information in retraction notices, I don’t understand. It won’t help anyone if you still don’t know what actually happened.

What should actually happen – and this is borne out in the economics literature – is that retraction notices should state as clearly as possible what occurred, or state frankly if it’s unclear, as sometimes people have muddied the waters. If that’s the case, then say so: ‘we don’t know what’s happened here because lawyers on either side have been bickering for a year about this – but we feel we should tell readers anyway’. That’s a pretty honest way to go, unlike the approach of not saying anything.

Retraction notices should state as clearly as possible what occurred, or state frankly if it’s unclear.

For individual researchers, it’s very clear that if you retract a paper for fraud, dishonesty or misconduct, you have a retraction penalty, and your citations decline. Maybe your whole subfield’s citations decline as you bring everyone down with you. When you retract a paper due to honest error and the retraction notice very clearly explains this, you don’t see that decline. One study says you might even see a bump, although that hasn’t been replicated.

So, clarity in retraction notices is what’s needed. I think the notion that we can classify everything with a set of words – that will be argued about forever anyway – is the wrong way to go.”

Even after retraction, papers continue to be cited. Do journals need to do more to publicise retractions, and how can authors make sure they don’t fall into this trap?

“Again, it depends what journals want. Do they want to be upfront and help scientists be more efficient, make new discoveries and build knowledge, or are they more interested in protecting their reputations and hiding the fact that something has been retracted? I go by the old adage ‘never ascribe to malice that which is adequately explained by incompetence’, so I’m willing to acknowledge that the lack of action from journals may be due to incompetence rather than being intentional.

Do they [journals] want to be upfront and help scientists be more efficient, make new discoveries and build knowledge, or are they more interested in protecting their reputations and hiding the fact that something has been retracted?

There are now countless studies, conducted by librarians and bibliometrics and scientometrics scholars, showing that it can be very difficult to find that an article has been retracted. Journals and publishers are not transmitting the metadata to where they should (whether this is PubMed, Web of Science, etc) and sometimes they transmit the wrong metadata (eg they call something a correction when it’s a retraction). Even on the journal’s own pages or on the PDFs, articles often don’t show up as retracted. Journals should do more, as they’re the ones who end up publishing papers citing retracted work.

Journals should do more, as they’re the ones who end up publishing papers citing retracted work.

So, how can authors make sure they don’t fall into this trap? We created a database primarily for tracking retractions, and it is more comprehensive than any other database of, or containing, retractions. At the moment, there are close to 25,000 retractions in our database – that’s almost twice as many as you’ll find in any other similar database. Authors can search for articles one-by-one using our database, if they want, or they can sign up for software suites and bibliographic management packages that are working with Retraction Watch’s database. If you use Zotero, for example, you’ll get an automatic flag every time a paper in your library is retracted. We get notes about this on Twitter all the time from people who didn’t know it existed and find it really helpful – we’re thrilled with that. We’d love the Retraction Watch database to be incorporated into more software packages too. Without automated flagging, which publishers just aren’t doing at this point, I just don’t see how authors can avoid citing retracted work – but these automated processes have become pretty easy to do.”

Without automated flagging, which publishers just aren’t doing at this point, I just don’t see how authors can avoid citing retracted work.
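To illustrate the kind of automated flagging Oransky describes, here is a minimal Python sketch that checks a reference list of DOIs against a locally held list of retracted DOIs. It is purely illustrative: the file layout and column name are assumptions, not the actual Retraction Watch export format, and real integrations (such as Zotero’s) work against the live database rather than a static file.

```python
# Hypothetical sketch of automated retraction flagging: compare cited DOIs
# against a locally downloaded list of retracted DOIs. The CSV layout
# (a 'doi' column) is an assumption for illustration only.
import csv

def load_retracted_dois(path):
    """Read a CSV with a 'doi' column into a set of normalised DOIs."""
    with open(path, newline="") as f:
        return {row["doi"].strip().lower() for row in csv.DictReader(f)}

def flag_retracted(cited_dois, retracted):
    """Return the cited DOIs that appear in the retracted set."""
    return [doi for doi in cited_dois if doi.strip().lower() in retracted]

# Example with placeholder DOIs: one cited paper is on the retracted list.
retracted = {"10.1000/retracted.1"}
print(flag_retracted(["10.1000/retracted.1", "10.1000/fine.2"], retracted))
# → ['10.1000/retracted.1']
```

The point of the sketch is how little machinery the check requires once clean retraction metadata exists: the hard part, as the interview notes, is journals and publishers transmitting that metadata reliably in the first place.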

The extent and sophistication of journal targeting by paper mills and scams is ever-increasing. From your perspective, what can be done to tackle this problem and future-proof publishing processes against these attacks?  

“To me, this really takes a two-pronged approach. One prong is to tackle what we know is out there that no-one has seen fit to tackle yet. iThenticate and other software that looks for plagiarism and duplication follow this model: journals and publishers realised there was a lot of plagiarism, someone developed some software, and now everyone uses it. The same could be done with our database of retractions. Right now, we don’t have a good set of software tools that can detect image manipulation or image duplication, for example. We have individuals including Elisabeth Bik who are doing amazing work, but that’s not really scalable and we need a scalable solution. However, these solutions are only looking to fight yesterday’s battles. Meanwhile, the people who came up with these bad practices are coming up with more ‘clever’ approaches and we won’t know what those are until they explode. So, all of this fits into one prong – rooting out problems once we know they exist.

We also need to take a step back and move upstream to what the real issue is, which is the incentive structure. If we really want to de-incentivise bad (arguably, sometimes criminal) behaviours of misconduct and fraud, we need to decouple every career-affecting decision in academia from publishing papers in top journals. If you remove that incentive, then nobody’s going to feel a particular need to fake papers, go to a paper mill, or anything else.

If we really want to de-incentivise bad (arguably, sometimes criminal) behaviours of misconduct and fraud, we need to decouple every career-affecting decision in academia from publishing papers in top journals.

It’s probably no accident that paper mills tend to be concentrated in places, particularly China, where the incentive structure has been completely warped towards papers for so many years. If we don’t look at these incentive structures, every year or so, another scam will come out.

If we don’t look at these incentive structures, every year or so, another scam will come out.

We wrote about fake peer review back in 2012 – it turns out this hasn’t been eradicated, although it is now easier to detect and has been cut down. We broke a story about selling authorship in Russia, we’ve reported on paper mills – there’s just always something, and there’s always going to be something else. I don’t have the kind of mind to think up what will be next, although I can often find it once it happens thanks to sources like the scientific sleuths. None, or very little, of this will happen if we remove the very pervasive and poisonous incentive structures we have at the moment.”

As noted in the 10 takeaways from 10 years at Retraction Watch, pharma-funded publications account for a low proportion of retractions. You’ve noted that this is unsurprising given the increased scrutiny in pharma versus academia – what changes should academia make to reduce retraction rates? 

“Maybe this is controversial, but I don’t know that we should (certainly in the short or medium term) push to reduce retraction rates. If we mean reduce retraction rates as a proxy for reducing ‘bad behaviour’ – sloppiness or even misconduct – then yes, we should take measures to try to prevent that or to detect it better. There are still a lot of papers that should be retracted but haven’t been, so I don’t think we’ve reached the peak of retractions yet. Just like any other metric, if you suddenly decide that we need to cut down on retractions, that will make things worse. I do think that there are lots of steps that academia can take to try to cut down on these bad behaviours – this goes back to incentives, in a large part.

I don’t think we’ve reached the peak of retractions yet. Just like any other metric, if you suddenly decide that we need to cut down on retractions, that will make things worse.

On the flipside, I don’t think that we should absolve pharma-funded publications of bad behaviour or misconduct. For those sorts of papers, studies can be set up in such a way as to get the desired results, but this is not something that would be considered misconduct or would be a ‘retractable offence’. There are gatekeepers and hoops that studies need to jump through (like Institutional Review Boards), but we shouldn’t assume that those systems are perfect.

Both settings have a lot of work to do – in academia you see behaviours that are ‘retractable offences’ while in pharma, that’s not the case, but research practices can have other negative effects. If universities are interested in lowering the rates of misconduct in their ranks, they need to look inwardly and examine whether they’ve created incentive structures that reward good or bad behaviour.”

Finally, in your opinion, what is the biggest challenge to research integrity right now, and how can this be overcome?

“I’m going to sound like a broken record, but I do think that incentives are my main concern and the thing that needs the most attention. That being said, one of the things that worries me is the significant tribalism in science, which has been amplified and made more visible by COVID-19.

One of the things that worries me is the significant tribalism in science, which has been amplified and made more visible by COVID-19.

You want constructive criticisms and critiques in science – you don’t want them to be ad hominem attacks. The critiques should help move the science and the evidence to a better place. Often, the most critical peer reviews are not necessarily of the papers that are most problematic (or frankly those that shouldn’t have been considered for publication in the first place), but are of papers that disagree with your point of view. I guess there’s a tribalism that cuts in every which way, whether it’s scientific, political, or due to the family tree of where and who you trained with. You end up with a lot of people shouting at each other and ‘creating heat without shedding a lot of light’. In the same way, social media has amplified and exacerbated a lot of issues in terms of politics, world events, conspiracy theories and what have you. Sometimes the loudest voices in science don’t have the evidence on their side, but their rhetorical approach is better.

Sometimes the loudest voices in science don’t have the evidence on their side, but their rhetorical approach is better.

I’m all for free speech – I think everyone should feel free to speak their mind and I encourage that, even when they disagree with me – but if we don’t figure out how to get away from this tribalism, we’re just going to polarise science even more. If we couple that with all the issues science is facing, whether it’s a real lack of funding, or publish-or-perish incentives, it’s not going to go well.”

Ivan Oransky is Editor in Chief of Spectrum, Distinguished Writer In Residence at New York University’s Carter Journalism Institute, and President of the Association of Health Care Journalists. He is also co-founder of Retraction Watch, which can be followed on Twitter @RetractionWatch. You can contact Ivan at team@retractionwatch.com and follow him on Twitter @ivanoransky.


——————————————————–

With thanks to our sponsor, Aspire Scientific Ltd


Research integrity across the Atlantic: our summary of the first Biomedical Transparency Summit series webinar (4 March 2021) – https://thepublicationplan.com/2021/03/04/research-integrity-across-the-atlantic-our-summary-of-the-first-biomedical-transparency-summit-series-webinar/

Last week, the Center for Biomedical Research Transparency (CBMRT) hosted the first of three webinars forming this year’s virtual Biomedical Transparency Summit series. The webinar, entitled ‘Research integrity – developments across the Atlantic’, was opened by the CBMRT’s CEO Sandra Petty (recently interviewed by The Publication Plan) and speakers included Professor Ana Marušić (Standard Operating Procedures for Research Integrity [SOPs4RI]) and Dr Michael Lauer (National Institutes of Health [NIH]).

Professor Marušić spoke about the importance of research ethics and integrity, which together contribute to ‘responsible research’. She also shared the ongoing efforts to develop the SOPs4RI toolbox, funded by the European Commission, which aims to assist research-performing and funding organisations to promote research integrity. SOPs4RI have found that few data exist about how institutions can effectively improve research culture, but have also identified many potential actions that can be taken.

While highlighting diverse examples of research misconduct, Dr Lauer discussed the different stakeholders responsible for ensuring research integrity and discouraging misconduct, emphasising that everyone plays a role. He noted that the NIH have previously clarified that institutions receiving funding are responsible for ensuring that their employees (and final funding recipients) adhere to research best practices, such as disclosing conflicts of interest and preventing issues like falsification of data and plagiarism.

Further topics of discussion included:

  • how collegiality impacts research integrity
  • the role of authors and peer reviewers in spotting research misconduct
  • whistleblower protections in research.

The webinar concluded with a panel discussion moderated by Dr Devon Crawford (National Institute of Neurological Disorders and Stroke) and Dr David Tovey (Journal of Clinical Epidemiology).

You can catch up on the webinar in full by viewing the recording or the slides. You can also read our summaries of the second and third webinars in the series.


——————————————————–

Summary by Kristian Clausen MPH from Aspire Scientific

