Bias – The Publication Plan
A central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing.
https://thepublicationplan.com

Overcoming bias in ‘overviews of reviews’: a spotlight on appraisal tools
Tue, 30 Jul 2024
https://thepublicationplan.com/2024/07/30/overcoming-bias-in-overviews-of-reviews-a-spotlight-on-appraisal-tools/

KEY TAKEAWAYS

  • ‘Overviews of systematic reviews’ are a feature of evidence-based decision making, but are only as strong as the individual reviews they include. Evaluating potential biases and the methodological quality of systematic reviews is therefore crucial.
  • A recent article examines 2 recommended systematic review assessment tools, AMSTAR-2 and ROBIS. While both have value, their use requires proper training, time, and know-how.

Synthesising evidence from multiple systematic reviews (also known as conducting an umbrella review or ‘overview of reviews’) can form a key part of evidence-based decision making and treatment guidelines. However, conducting effective ‘overviews of reviews’ requires careful planning to minimise bias, which can be present at either a primary study or individual review level. In a recent BMJ Medicine methods primer, Carole Lunny and colleagues address the challenges of assessing and reporting bias in systematic reviews. The group offer a detailed examination of AMSTAR-2 and ROBIS, two recommended appraisal tools, and provide practical guidance for authors of ‘overviews of reviews’.

AMSTAR-2 versus ROBIS

The group compared key features of each tool.

AMSTAR-2:

  • 16-item checklist
  • focuses on the methodological quality of systematic reviews of healthcare interventions, including risk of bias
  • reportedly favoured for its quick and easy-to-use format
  • may be preferred for broad assessment of systematic review quality.

ROBIS:

  • domain-based tool
  • 19 items, aimed at identifying biases in systematic reviews
  • useful for pinpointing concerns in review conduct and assessing relevance
  • requires “more thoughtful assessment and time”
  • may be preferred for more nuanced assessments, or comparisons of risk of bias across multiple types of systematic reviews.

Standardising ‘overviews of reviews’

The authors call for a standardised approach to ‘overviews of reviews’ to enhance their credibility and value.

Regardless of the appraisal tool used, the authors call for a standardised approach to ‘overviews of reviews’ to enhance their credibility and value. They outline several key recommendations:

  • Report methodological quality or bias by item, domain, and overall judgement, focusing on outcomes.
  • Discuss risk of bias for each outcome.
  • Highlight any individual review methodological quality issues or potential biases as limitations of the ‘overview of reviews’.
  • Use ROBIS to subgroup reviews by risk of bias, identifying overemphasised findings and excluding high-risk reviews.

An expanding toolkit

Previously, the launch of PRISMA-S provided much-needed guidance on reporting literature searches within systematic reviews, and Cochrane’s Hilda Bastian proposed solutions to ensure that systematic review protocols were robust. Now, Lunny and colleagues’ primer, and the tools therein, sit alongside initiatives from the LATITUDES Network to form part of a drive to reduce bias in evidence synthesis.

————————————————–

Do you use a specific tool(s) when synthesising evidence from systematic reviews?

LATITUDES: a free one-stop-shop for “risk-of-bias” tools
Wed, 27 Mar 2024
https://thepublicationplan.com/2024/03/27/latitudes-a-free-one-stop-shop-for-risk-of-bias-tools/

KEY TAKEAWAYS

  • The LATITUDES Network provides a much-needed resource for identifying and accessing reliable validity assessment tools, for use during evidence synthesis.
  • LATITUDES is a parallel resource to the EQUATOR Network, with both acting to disseminate best practice in health research.

The LATITUDES Network was launched in late 2023 as a parallel resource to the EQUATOR Network. LATITUDES provides key validity assessment tools as part of a drive to “disseminate best methods and practice” for health research studies.

Why LATITUDES?

Although the EQUATOR Network has been instrumental in promoting transparent and accurate reporting guidelines, an equivalent resource for accessing appropriate, validated, and reliable critical appraisal tools has been lacking to date. These tools, also known as validity assessment tools, assess study quality in terms of:

  • risk of bias or systematic error
  • applicability of findings to real-world settings
  • reporting quality.

How will LATITUDES help?

The LATITUDES Library lists tools that meet 4 specified inclusion criteria, such as being applicable to the wider research community, and assessing multidimensional aspects of validity within a study. Those which fulfil an additional 4 criteria are designated LATITUDES key tools. These tools:

  • focus on risk of bias
  • are developed by multidisciplinary teams
  • avoid use of summary numerical quality scores
  • incorporate domain-specific or overall assessment of risk of bias.

The LATITUDES Network believes the library will benefit “anyone needing to assess the validity of their evidence base as part of an evidence synthesis”.

The library will benefit “anyone needing to assess the validity of their evidence base as part of an evidence synthesis”.

How to get started?

The LATITUDES Network provides a range of resources that can help researchers get to grips with validity tools — including guidance on which tool to use and training resources — or enable them to register a tool for inclusion.

————————————————–

What do you think – will the resources provided by the LATITUDES Network improve the quality of validity assessment in evidence synthesis?

Breaking barriers: challenging English-language dominance in scientific publishing
Tue, 13 Feb 2024
https://thepublicationplan.com/2024/02/13/breaking-barriers-challenging-english-language-dominance-in-scientific-publishing/

KEY TAKEAWAYS

  • The English language dominates scientific publishing, which creates multiple barriers for non-native English speakers.
  • Practical initiatives from journals, alongside the use of new technologies, could remove these barriers and improve access to science.

Scientific research is expanding globally, but the persistent dominance of the English language in scientific publishing creates disadvantages for non-native English speakers. An editorial in Nature Human Behaviour discusses the impact of these barriers and what can be done to help.

Existing barriers

As highlighted by the editorial team, English-language dominance negatively impacts international science and decision-making. It also has a detrimental effect on researchers themselves. The group describe several instances of barriers experienced by non-native English scholars, particularly during the peer review process. These include:

  • worse peer review outcomes, “probably due to reviewer and editor bias”
  • differences in review outcomes, unless author identities were blinded
  • bearing the resource burden and costs associated with any multi-lingual publishing or translation.

Removing barriers

The team describe how Nature Human Behaviour “strives for greater diversity, equity, and inclusion”, and the practical steps taken to try to achieve this. These include:

  • publishing non-English-language translations and summaries to increase accessibility
  • welcoming the inclusion of supplementary material in other languages
  • considering research for peer review, regardless of language quality
  • ensuring that correcting linguistic errors is not the responsibility of authors and peer reviewers, and that such errors are not barriers to publication.

Future directions

The editorial team also discuss the potential for new technologies, such as artificial intelligence (AI) tools, to improve the situation for non-native English speakers. For example, AI-based translation tools could help authors and journals to translate work into English, or allow readers to translate work published in English into their native language at the click of a button. In the meantime, the practical initiatives highlighted by Nature Human Behaviour serve as an example to everyone in medical publishing of steps we could all take to help remove language barriers.

The use of AI tools may help to overcome language barriers in scientific publishing.

————————————————

Do you routinely incorporate non-English-language summaries in your publications?

Double blinding in peer review: does author anonymity have benefits?
Tue, 09 Jan 2024
https://thepublicationplan.com/2024/01/09/double-blinding-in-peer-review-does-author-anonymity-have-benefits/

KEY TAKEAWAYS

  • Single blinding in peer review may be subject to unconscious bias, putting authors from wealthier, English-speaking countries at an advantage.
  • Double blinding can improve equity in peer review and even increase reviewer numbers, thus creating a more time-efficient process.

The shortfalls of the current peer review process have long been debated. Now, a real-life experiment conducted by a journal’s own editorial team leads them to call time on the traditional single-blind peer review system, in favour of a more equitable double-blind approach.

How does double-blind review improve equity?

Writing in a post on the LSE Impact Blog, Professor Charles Fox (Executive Editor of the journal in question, Functional Ecology, at the start of the experiment) explained his team’s conclusions that:

Authors from wealthy countries and those with higher levels of English language proficiency receive an advantage under current peer review processes.

In the study, around 3,700 papers submitted to the journal over a period of 3 years were randomly assigned to either single-blind (ie, only the reviewer was anonymous) or double-blind (ie, reviewers and authors were anonymous) peer review. The group found that:

  • single blinding can be subject to positive biases relating to the country of origin of authors
  • authors from wealthier, English-speaking countries received higher scores and were more likely to be invited to proceed to the revisions stage, when their identities were known
  • no biases were identified in relation to gender.

As a result of the experiment, all peer review at Functional Ecology is now double blind.

What are the challenges with double-blind review?

Prof. Fox acknowledges that concerns exist around potential costs and limitations associated with double blinding, and that there is a generally held assumption that individuals would be less keen to review under such a system. The group, however, found the reverse to be true, with double blinding leading to increased reviewer numbers and a more time-efficient process.

Issues can also arise in maintaining the anonymity of authors, for a number of reasons:

  • Individuals can be recognisable to their peers because of their specialism, research methods, etc.
  • Manuscripts may have been previously submitted to a preprint server or earlier data published as part of a longer-term trial.

Sixty percent of reviewers in the study stated that they knew, or suspected, the identity of authors despite anonymisation, and in 90% of these cases they were correct.

What is the best way forward?

While the study’s findings reinforce existing evidence that unconscious bias exists within peer review, single blinding is still standard practice for most journals. Some offer optional anonymisation (ie, authors can choose whether to be identified), but Prof. Fox argues that this does not go far enough. In an ‘opt-in’ system, authors from more affluent countries – who are more likely to benefit from positive bias – would perhaps be unlikely to hide their identity.

Overall, Prof. Fox maintains that any potential challenges associated with double blinding do not outweigh the benefits of improved objectivity. He calls on journals to follow Functional Ecology’s example and make the switch to mandatory double-blind peer review.

————————————————–

What do you think – is mandatory double blinding feasible in peer review?

Author reports of potential conflicts of interest: room for improvement
Wed, 18 Oct 2023
https://thepublicationplan.com/2023/10/18/author-reports-of-potential-conflicts-of-interest-room-for-improvement/

KEY TAKEAWAYS

  • Inaccurate author disclosures continue to be an issue in medical publishing. A recent study shows that most authors fail to report, or under-report, ‘potential conflicts of interest’.
  • The study’s authors call for action from journals to help remove stigma and increase transparency.

The fully transparent disclosure of relationships between authors of scientific research and other stakeholders is paramount to maintaining the credibility of research and upholding public confidence. Nevertheless, inadequacies in reporting practices remain a challenge. A recent study by Dr Mary Guan and colleagues sheds more light on current practices through a detailed comparison of author-disclosed ‘potential conflicts of interest’ versus pharma-reported payments to healthcare professionals.

What did the research reveal?

Guan et al. reviewed disclosures from the first, second, and final US authors of 150 clinical manuscripts published from January 2019 onwards in the top 3 US rheumatology journals. The researchers then compared this information with entries in the Open Payments database. The group’s analyses yielded some surprising findings:

  • Disclosures were inaccurate in 92% of papers that involved authors deemed to have ‘potential conflicts of interest’.
  • Of the 135 authors with ‘potential conflicts of interest’, 87% disclosed inaccurately.
  • Where data were available, the total monetary value of undisclosed potential conflicts was found to be nearing $5.2 million. For those that were ‘under-disclosed’, the total value was just above $4.1 million.
  • Among the 14 papers that reported clinical trial data, all authors failed to report a potential conflict of interest and in some cases also under-reported potential conflicts.

So, what can we do to improve reporting accuracy?

In recent years, the International Committee of Medical Journal Editors moved to using the term ‘disclosure of relationships’ rather than ‘conflicts of interest’. This was in part to ensure that guidance was simple for authors to follow in a consistent way: all relationships should be disclosed, leaving readers to draw their own conclusions as to which may constitute a potential conflict of interest. Guan et al. point out that perceived stigma surrounding the term ‘potential conflict of interest’ could also deter authors from accurate reporting, and that a more neutral term may encourage better compliance. Furthermore, they propose that “journals must clearly articulate their reporting expectations and also must clearly emphasise that industry payments do not, a priori, impair the validity of a manuscript”.

“Journals must clearly articulate their reporting expectations and also must clearly emphasise that industry payments do not, a priori, impair the validity of a manuscript”.

————————————————–

Which strategy would be most effective at improving the accuracy of author disclosures?

How accurate are conflict-of-interest disclosures in high-impact journals?
Thu, 03 Nov 2022
https://thepublicationplan.com/2022/11/03/how-accurate-are-conflict-of-interest-disclosures-in-high-impact-journals/

KEY TAKEAWAYS

  • The accuracy of COI disclosures, including those in high-impact journals, remains questionable, despite adoption of the ICMJE disclosure form.
  • Baraldi et al call for readers to compare COI disclosures in journals with payment data provided by the medical industry to prevent potential bias going unnoticed.

Conflict-of-interest (COI) disclosure is an important tool for identifying potential bias associated with medical research. Although efforts to improve COI disclosure have been made by various parties (including the US Government, The International Committee of Medical Journal Editors (ICMJE), and medical journals themselves), issues around transparent reporting remain.

To assess the scale of the problem in high-impact journals, Baraldi et al examined original clinical-trial research articles published in NEJM and JAMA in 2017, and compared physician-authors’ self-disclosures of general payments with their Open Payments data. The payments were categorised as ‘disclosed,’ ‘undisclosed,’ ‘indeterminate,’ or ‘unrelated’, per definitions based on the ICMJE form used by both journals:

  • Disclosed: The author disclosed a payment from a company that matched the data from Open Payments.
  • Undisclosed: The author received a payment during the relevant disclosure period that did not match any disclosures provided to the journal, AND the company offers, or offered at the time of the payment, a product that could broadly be considered related to the area of inquiry.
  • Indeterminate: The author received a payment during the relevant disclosure period that did not match any disclosures provided to the journal, BUT the company was a subsidiary or parent company of a company listed on the disclosure, AND/OR it could not be determined whether that company offers, or offered at the time of the payment, a product that could broadly be considered related to the area of inquiry, AND/OR the payment has been disputed.
  • Unrelated: The payment was not disclosed, AND the company from which the payment originated does not offer a product that could broadly be considered related to the area of inquiry.

Thirty-one articles each from NEJM and JAMA met the study inclusion criteria, totalling 118 unique physician-authors. Of the 106 (90%) authors who received general payments, 86 (81%) received undisclosed payments, with 18 (21%) and 33 (38%) of those disclosing less than half and none of their payment amounts, respectively. No significant difference in COI disclosure rates was found between NEJM and JAMA authors.

Of the authors who received general payments, 86 (81%) received undisclosed payments, with 18 (21%) and 33 (38%) of those disclosing less than half and none of their payment amounts, respectively.

Baraldi et al concluded that self-disclosure of COIs is insufficient to ensure accurate reporting of potential bias, and call for readers to compare COI disclosures in journals with payment data provided by the medical industry.

—————————————————–

What do you think – would requiring US-based physicians to provide links to their Open Payments reports with their manuscript submissions improve the accuracy of COI disclosures?

Language-generating AI in science: transformational or deformational?
Thu, 13 Oct 2022
https://thepublicationplan.com/2022/10/13/language-generating-ai-in-science-transformational-or-deformational/

KEY TAKEAWAYS

  • Language-generating artificial intelligence could have an empowering impact in science, but non-transparency and oversimplification of complex data could threaten scientific professionalism.
  • Authors call on government bodies to enforce systematic regulation to help realise the potential of large language models in science.

Large language models (LLMs) are artificial intelligence algorithms that recognise, summarise, and generate human language from very large text-based datasets. LLMs could well empower scientists to draw information from big data; however, researchers from the University of Michigan are concerned that without appropriate regulation, LLMs could threaten scientific professionalism and intensify public distrust in science.

A recent report examined the potential social change brought about by LLMs. In a subsequent Nature Q&A, the report’s co-author, Professor Shobita Parthasarathy, described the impact of LLMs in the scientific disciplines. She highlighted the potential for LLMs to help large scientific publishers to automate aspects of peer review, generate scientific queries, and even evaluate results, but cautioned that without systematic regulation, LLMs could exacerbate existing inequalities and oversimplify complex data.

Without appropriate regulation, LLMs could threaten scientific professionalism and intensify public distrust in science.

Developers are not required to disclose the accuracy of an LLM, and the models’ processes are not transparent, meaning that users could be unaware that LLMs can make errors, include outdated information, and remove important nuances. Furthermore, readers are unable to distinguish LLM-generated text from human-generated text, so the technology could be employed to distribute misinformation and generate fake scientific articles.

For the potential of LLMs to be realised in science, Prof Parthasarathy calls on government bodies to enforce transparency in their use, stipulating that those who develop LLMs should disclose the models’ processes and make clear where LLMs have been used to generate an output.

—————————————————–

Do you think large language models could benefit science if appropriately regulated?

Tracking diversity in scientific publishing – a new global initiative
Thu, 28 Apr 2022
https://thepublicationplan.com/2022/04/28/tracking-diversity-in-scientific-publishing-a-new-global-initiative/

KEY TAKEAWAYS

  • A group of 52 publishers representing >15,000 journals will track the gender, race, and ethnicity of their authors, reviewers, and editors.
  • The collected data will be used to improve diversity in scientific publishing.

Minority groups are often under-represented in science, yet there is a lack of data on how structural racism and bias influence the content published in journal articles. In a recent Nature feature, Holly Else and Dr Jeffrey M. Perkel outline journals’ plan to track researcher diversity and improve inclusion in scholarly publishing.

The initiative originated in 2020 with a group of 11 publishers, led by the Royal Society of Chemistry, who pledged to track and reduce bias in science publishing. The joint group has expanded and now includes 52 publishers representing over 15,000 journals around the globe, including the BMJ, Elsevier, Springer Nature, and Wiley. Its members agreed to ask those who authored, reviewed, or edited manuscripts about their gender, race, and ethnicity.

“Diversity data enable us to define where problems such as bias lie in scholarly publishing, put in place actions, set goals, and measure progress.”

Else and Perkel highlight that computational algorithms can only provide rough estimates of ethnic and geographical origin from names, and experts believe that the best way to obtain an accurate insight is to ask scientists to self-identify. The joint group’s standardised approach to collecting voluntary, self-reported diversity data was launched earlier this month. The questions cover race using 6 categories and geographical ancestry using 11 categories, allowing the respondents to select all applicable options (publishers are encouraged to include an additional “Self-describe”/”Other” option for both questions). The gender question asks the respondents to self-identify as a man, woman, or non-binary/gender-diverse individual. The answers will be collected through editorial management systems and stored separately with restricted access, so they will not be visible to peer reviewers.

The article includes opinions from researchers who see such initiatives as a first step toward dismantling systemic barriers affecting minority researchers and highlight the importance of acting on the findings by updating publishing policies.

—————————————————–

Would you be willing to disclose your gender, race, and ethnicity during manuscript submission?

Shedding light on industry influence over healthcare
Fri, 21 Jan 2022
https://thepublicationplan.com/2022/01/21/shedding-light-on-industry-influence-over-healthcare/

KEY TAKEAWAYS

  • The medical product industry is deeply connected to all players in the healthcare ecosystem via financial and non-financial ties, which are seldom transparently reported.
  • If left unregulated, these conflicts of interest can impact patient care.

The medical product industry has extensive relationships with virtually every player in the healthcare ecosystem, though these conflicts of interest are rarely described or quantified, according to a study published in The BMJ.

Dr Susan Chimonas and co-authors performed a scoping review of 538 publications covering 37 countries to identify and characterise ties between the medical product industry (pharmaceutical, medical device, and biotechnology companies) and individuals/organisations in healthcare. The latter represented a broad spectrum of constituents making up the healthcare ecosystem: government (eg, regulatory agencies), market supply chain (eg, purchase and distribution agents), healthcare profession (eg, medical schools, journals), and non-profits (eg, advocacy organisations).

A panel of experts was enlisted to validate the team’s findings and aid in mapping the complex relationships. The resulting network map demonstrated that industry players have permeated the healthcare ecosystem, both directly and indirectly, maintaining some level of influence across the following activities:

  • research
  • clinical care
  • health professional education
  • guideline development
  • formulary selection.

Industry involvement was highest in research (disclosed in 56% of the publications) and lowest in formulary selection (1%).

Industry involvement was highest in research (disclosed in 56% of the publications) and lowest in formulary selection (1%).

Finally, the team assessed whether these conflicts of interest were regulated or documented systematically. While financial ties were sometimes subject to national and/or international oversight (such as the US government’s Open Payments website), non-financial ties were notably less transparent. For instance, the authors found no evidence of oversight for relationships between the industry and public officials, regulators, public health agencies, payers, or purchasing and distribution agents.

These complex, often undisclosed relationships trickle down to affect patient care. The article supplement offers a poignant example of how industry funding and non-financial pressure contributed to the opioid epidemic, which has to date resulted in hundreds of thousands of deaths.

“Medical product industry influence could undermine healthcare equity and sustainability by driving up costs for individual patients and the healthcare system overall”.

The authors cautioned that there are likely many more examples of harm caused by this kind of industry influence, and that without greater regulation, reduced healthcare equity and affordability are to be expected.

—————————————————–

In your opinion, who is most responsible for encouraging transparency in the medical product industry?

Are Registered Reports living up to expectations?
Thu, 09 Dec 2021
https://thepublicationplan.com/2021/12/09/are-registered-reports-living-up-to-expectations/

KEY TAKEAWAYS

  • Registered Reports (RRs) – a publication format offering peer review and in-principle acceptance before research is conducted – are gaining popularity.
  • A recent study has shown RRs improve research quality versus traditional papers, without impacting innovation.

The Registered Reports (RRs) publishing format is gaining popularity, with over 300 journals now offering the option. With RRs, the first stage of peer review and in-principle acceptance occur before study outcomes are known, which means that publication decisions are based on the importance of the research question and methodological rigour, and are not influenced by whether results are interesting, novel, or negative. RRs are designed to improve research quality by addressing publication bias, but only recently have they been demonstrated to reach their goal.

RRs “offer clear and tangible benefits to improving research and the research culture”.

In a recent article in Nature Human Behaviour, Dr Courtney Soderberg and colleagues presented findings from an observational investigation of perceptions of the quality and importance of RRs versus papers published using the traditional model. The authors asked 353 researchers to each peer review a pair of articles: one of 29 published RRs in psychology or neuroscience, and a matched non-RR comparison paper. The articles were evaluated across 19 outcome criteria, including:

  • quality
  • rigour
  • novelty
  • creativity
  • importance of the methodology and findings.

RRs numerically outperformed standard articles across all outcome criteria. Their strongest performance advantage was in rigour of methodology and analysis, some of the key aspects of peer review. Notably, RRs were rated similarly to comparison papers in importance, novelty of research question, and creativity of methodology, which held even among reviewers who admitted to being sceptical or neutral about RRs. These results address sceptics’ concerns that the planning required for RRs could hinder innovation and promote ‘boring’ research.

The authors propose that RRs shift researchers’ incentives away from producing novel or positive findings and towards conducting and publishing rigorous research on important questions, without impacting research creativity.

The authors propose that RRs shift researchers’ incentives away from producing novel or positive findings and towards conducting and publishing rigorous research on important questions, without impacting research creativity. While follow-up studies are needed to assess the generality of these conclusions, RRs “offer clear and tangible benefits to improving research and the research culture”. With the benefits now shown, we look forward to seeing how the use of RRs will develop across the publishing field.

—————————————————–

Would you consider publishing a Registered Report?
