Reproducibility – The Publication Plan
A central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing.
https://thepublicationplan.com

Safeguarding scientific image quality and integrity: what more can be done?
Wed, 29 Oct 2025
https://thepublicationplan.com/2025/10/29/safeguarding-scientific-image-quality-and-integrity-what-more-can-be-done/

KEY TAKEAWAYS

  • Scientific image editing serves a vital role in clear communication, but the pursuit of presentational clarity must not compromise data integrity.
  • Combatting image manipulation requires systematic collaboration across the research ecosystem, including standardised guidelines and new verification technologies.

As concerns mount over image manipulation in scientific publishing, the research community has begun developing new strategies to balance visual clarity with data integrity. Writing in Nature, Sara Reardon explores the “fine line between clarifying and manipulating”, highlighting the challenge of making figures both accessible and faithful to original data.

The art and science of visual presentation

Scientific images often require editing for clarity, like adjusting brightness, adding scale bars, or enhancing contrast. While such modifications are essential for effective scientific communication, a 2021 study by Helena Jambor and colleagues revealed that poorly presented figures remain surprisingly common, suggesting researchers need better training in visual data presentation.

When enhancement becomes manipulation

The boundary between legitimate clarification and misconduct can be perilously thin. Science integrity consultant Elisabeth Bik warns that even minor edits – such as cloning image sections to cover dust particles – can undermine data credibility. Echoing a seminal 2004 article, Bik emphasises that “the images are the data”, meaning they should present the results actually observed rather than those the researchers expected. Any undisclosed alteration that changes the scientific message could constitute misconduct. As Reardon notes, the cardinal rule remains to “show your work” – enhancing clarity without obscuring underlying data.

“The boundary between legitimate clarification and misconduct can be perilously thin… the cardinal rule remains to ‘show your work’ – enhancing clarity without obscuring underlying data.”

Detection and prevention strategies

Phill Jones examines potential systemic solutions to what Bik calls science’s “nasty Photoshop problem” in The Scholarly Kitchen. Journals increasingly conduct pre-publication screening using image-integrity specialists or AI tools that have demonstrated substantial promise in identifying manipulated images. Guidelines such as those from the International Association of Scientific, Technical & Medical Publishers aim to standardise best practice, while individual journals are also establishing specific image integrity requirements. Beyond journals:

  • Institutions are urged to provide training and embed image integrity expectations into research culture.
  • Post-publication peer-review platforms also play a role in identifying problematic images after publication.

Looking ahead, technical innovations offer promise. Jones highlights developments such as encrypted hashes and digital ‘signatures’ embedded in images, akin to secure web certificates, that could enable reliable verification of image authenticity. Ongoing collaboration and systematic change across the research ecosystem will be required to ensure scientific images are both clear and credible.
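The hash-based verification Jones describes might work along these lines. This is a minimal sketch of the general idea, assuming a simple publish-then-verify workflow; the function name and placeholder bytes are illustrative, not part of any proposed standard:

```python
import hashlib

def image_fingerprint(image_bytes: bytes) -> str:
    """Return a SHA-256 digest of the raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

# At submission, the journal records a fingerprint of the original image.
original = b"...raw microscopy image bytes..."
recorded_digest = image_fingerprint(original)

# Later, anyone can recompute the digest; any byte-level edit changes it.
assert image_fingerprint(original) == recorded_digest            # authentic
assert image_fingerprint(original + b"edit") != recorded_digest  # altered
```

A real scheme would also need to bind the digest to a trusted signing authority (the web-certificate analogy in the article), since a hash alone only proves that an image has not changed since the hash was recorded.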

—————————————————

Are current image integrity detection tools sufficient to prevent manipulation in scientific publishing?

Should data sharing be the next checklist item in reporting guidelines?
Tue, 11 Mar 2025
https://thepublicationplan.com/2025/03/11/should-data-sharing-be-the-next-checklist-item-in-reporting-guidelines/

KEY TAKEAWAY

  • Data sharing is an essential component of open science. The EQUATOR executive are calling for its inclusion in reporting guidelines.

Mandatory data sharing has been gaining pace in recent years, with data underlying US federally funded research soon needing to be made available immediately on publication. Since its inception in 2006, the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network, best known for hosting an online library of research reporting guidelines, has been an important advocate for improving the quality and transparency of medical research reporting. Now, in a recent article for The BMJ, the EQUATOR executive group explain how data sharing could be made standard practice as the next goal for open science.

Are current reporting guidelines sufficient?

The authors describe data sharing as a broad concept, spanning study registration, protocol availability, and availability of the underlying data. While many stakeholders have emphasised trial registration and transparency, the requirement to discuss data sharing is missing from most reporting guidelines. A notable exception is a 2020 addition to the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) checklist.

“While many stakeholders have emphasised trial registration and transparency, the requirement to discuss data sharing is missing from most reporting guidelines.”

The EQUATOR executive encourage new or updated reporting guidelines to cover data sharing, to ensure authors:

  • describe data sharing in protocol and results publications
  • are encouraged to share data
  • report on items linked to data sharing, like sharing of code and protocols.

What can authors do now?

While awaiting formal guidance on reporting data sharing, the EQUATOR executive suggest authors include the following in publications:

  • data definition, collection, and management methods
  • manuals or videos used in delivering the intervention
  • the statistical analysis plan, including any coding
  • any barriers to sharing data and other materials from the study.

The EQUATOR executive also encourage authors to adopt the findable, accessible, interoperable, and reusable (FAIR) and collective benefit, authority to control, responsibility, and ethics (CARE) principles when sharing data.

What’s next?

Beyond reporting guidelines, the authors signpost journal policies, funder expectations, and research assessment criteria as avenues to drive increased data sharing. They also point to data management and sharing plans, and suggest an opportunity for the EQUATOR Network to provide guidance on reporting these in a standardised way to further boost data sharing.

————————————————–

Do you routinely include data sharing statements in your publications?

Uncovering scientific ERRORs: can financial rewards work?
Thu, 31 Oct 2024
https://thepublicationplan.com/2024/10/31/uncovering-scientific-errors-can-financial-rewards-work/

KEY TAKEAWAYS

  • The ERROR project pays reviewers to search for mistakes in the scientific literature, while rewarding authors who agree to participate.
  • Reviewers and authors receive bonuses depending on the extent of errors found.

Amid rising retraction rates, the scientific record is increasingly scrutinised for signs of research misconduct like fabrication and image manipulation. But what about detecting errors in the data underlying scientific publications?

The ERROR project

Modelled on tech company ‘bug bounty’ programmes, the Estimating the Reliability & Robustness of Research (ERROR) project offers cash rewards for reviewers identifying incorrect or misinterpreted data, code, statistical analyses, or citations in scientific papers. Following ERROR’s launch earlier this year, Julian Nowogrodzki reviewed the project so far in a recent article in Nature.

Professor Malte Elson and colleagues are aiming to produce a blueprint for systematic error detection that will be scalable and transferable across scientific fields. Starting with highly cited psychology papers, the first review was posted in August. ERROR plans to cover 100 publications over 4 years, expanding into artificial intelligence, medical research, and potentially preprints.

“The ERROR project offers cash rewards for reviewers identifying incorrect or misinterpreted data, code, statistical analyses, or citations in scientific papers.”

Financial incentives

The project has 250,000 Swiss francs (~£220,000) of funding from Professor Elson’s institution, the University of Bern. Reviewers can earn up to 1,000 Swiss francs each time, plus a variable bonus of up to 2,500 Swiss francs depending on the scale of errors identified. Authors receive up to 500 Swiss francs: 250 for agreeing to participate and sharing data, plus a bonus if minimal errors are found.

A challenging path

Despite the incentives, ERROR has hurdles to overcome:

  • Author buy-in: So far, authors from just 17 of 134 selected papers have agreed to participate.
  • Data access: Underlying data may have been lost or authors may cite legal reasons barring sharing.
  • Reviewer expertise: There are limited potential reviewers with sufficient technical expertise yet no conflicts of interest. Dynamics linked to seniority may also prevent some prospective reviewers taking part.

The ERROR team hopes to convince research funders to allocate money for error detection – ultimately saving them from investing in flawed research. We look forward to seeing how this project helps move the needle towards a more reproducible scientific record.

————————————————–

Do you think current ‘ad hoc’ approaches to error detection in the scientific record are sufficient?

EQUATOR and COS join forces to bring open science to the fore
Tue, 17 Sep 2024
https://thepublicationplan.com/2024/09/17/equator-and-cos-join-forces-to-bring-open-science-to-the-fore/

KEY TAKEAWAYS

  • A partnership between the EQUATOR Network and the Center for Open Science (COS) could further the objectives of both organisations and raise awareness of best practices for open science.
  • Anticipated activities include educational outreach for researchers and updated reporting guidelines.

The open science movement aims to improve the transparency, accessibility, and reproducibility of scientific research. In May this year, the EQUATOR Network and Center for Open Science (COS) announced a 3-year collaboration in the hopes of accelerating the uptake of open science practices in health research through a series of shared activities.

A shared mission

Since launching the Open Science Framework in 2012 – a project management tool designed to streamline collaboration on, and dissemination of, scientific research – COS have been on a mission to facilitate and incentivise open research practices. This approach is highly complementary to EQUATOR’s objective to improve research quality and transparency, leading the organisations to collaborate on development of the Transparency and Openness Promotion (TOP) Guidelines in 2015.

Nearly a decade later, the two are joining forces officially.

What can we expect?

Planning is ongoing, but several potential strategies are being explored:

  • Educating researchers on processes such as writing and protocol creation, through a combination of outreach materials and toolkits
  • Developing toolkits to guide reviewers in assessing data sharing practices and protocol deviation
  • Increasing the visibility and use of existing tools, such as COS registration templates and EQUATOR reporting guidelines, through shared hosting
  • Integrating practices such as protocol posting, data sharing, and study replication into existing EQUATOR reporting guidelines, where these are not yet included.

In particular, COS is keen to utilise EQUATOR’s existing systems to enhance research credibility by promoting the uptake of preregistration.

The potential impact

Open science practices are already included in CONSORT, but inclusion in further reporting guidelines could scale up adoption substantially. In addition, the robustness of EQUATOR’s reporting standards could offer further structure and visibility to COS’ ongoing research.

Director of the EQUATOR Network, David Moher, has expressed his excitement around the partnership:

“Since its inception in 2006, the EQUATOR Network has worked hard to help improve comprehensive and transparent reporting of research. Collaborating with COS will help further achieve this objective.”

————————————————–

Do you think open science practices should be included in reporting guidelines?

Building trust: ACCORD guidelines for reporting consensus methods
Tue, 09 Jul 2024
https://thepublicationplan.com/2024/07/09/building-trust-accord-guidelines-for-reporting-consensus-methods/

KEY TAKEAWAY

  • The ACCORD reporting guidelines comprise a 35-item checklist that aims to improve the transparency of reporting on consensus methods.

The COVID-19 pandemic highlighted the need for effective knowledge-sharing to guide healthcare decisions. In rapidly evolving situations, reaching consensus among experts from diverse backgrounds is crucial, especially when evidence is emergent or inconsistent. This process is best achieved using formal consensus methods.

Despite their critical role in healthcare and policy decision-making, consensus methods are often inadequately reported, leading to inconsistencies and lack of transparency. To address these issues, the ACcurate COnsensus Reporting Document (ACCORD) project was established to develop comprehensive guidelines for reporting the numerous consensus methods used in medical research.

The ACCORD reporting guidelines aim to enhance trust in the recommendations made by consensus panels, benefiting authors, journal editors, reviewers, and, ultimately, patients through more reliable healthcare recommendations.

The ACCORD checklist was formulated using the EQUATOR Network’s methodology for developing reporting guidelines, with the full study protocol published in Research Integrity and Peer Review. The project began with a systematic review, followed by 3 rounds of the Delphi process and several steering committee meetings. To ensure a comprehensive perspective, a diverse panel was engaged, comprising 72 participants from 6 continents and various professional backgrounds, including clinical, research, policy, and patient advocacy. Through this rigorous process, a preliminary checklist was refined to a final list of 35 essential items covering all sections of a manuscript.

The ACCORD reporting guidelines aim to enhance trust in recommendations made by consensus panels, benefiting authors, journal editors, reviewers, and ultimately patients through more reliable healthcare recommendations.

————————————————–

What do you think – will the ACCORD guidelines improve the transparency of reporting on consensus methods?

Monitoring open science: what key practices should we measure?
Tue, 18 Jul 2023
https://thepublicationplan.com/2023/07/18/monitoring-open-science-what-key-practices-should-we-measure/

KEY TAKEAWAYS

  • Stakeholders at biomedical research institutions agreed on 19 key open science practices that should be monitored to help institutions track their progress.
  • Selected open science practices include prospective registration and timely reporting of clinical trials, specifying information to be shared, publishing open access, and disclosing author and funder information.

Open science aims to improve the quality and reproducibility of research through increasing transparency and accessibility. Following the 2021 adoption of the United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on Open Science, the US National Institutes of Health (NIH) and World Health Organization (WHO) have also introduced data sharing policies. ‘Success’ in open science generates effective research outcomes, reduces costs, and improves reputation. However, questions remain on which practices are the most important and how they should be monitored.

A recent publication in PLoS Biology shared results from a 3-round Delphi study asking which open science practices should be monitored at biomedical research institutions globally. In total, 80 participants, including researchers, open science specialists, and librarians, contributed to the study. They reached consensus on 12 traditional and 7 broader transparency practices that should be monitored, including several also assessed in PLOS’ Open Science Indicators initiative.

The key open science practices were related to:

  • registration and timely reporting for clinical trials and systematic reviews
  • information sharing, eg, stating whether data, study materials, or code were shared openly at the time of publication, and the use of clear licences for any shared information
  • use of persistent identifiers, including ORCID identifiers for researchers and digital object identifiers (DOIs) for shared data, study materials, or code
  • use of open access publication (including whether open access is immediate or delayed)
  • disclosure of author contributions, conflicts of interest, and funding.
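Several of these practices lend themselves to the automated monitoring the authors envisage. As one hedged illustration (the function name is ours, not from the study), a dashboard could validate ORCID identifiers before counting them, since ORCID iDs carry a published ISO 7064 mod 11-2 check digit in their final position:

```python
import re

def is_valid_orcid(orcid: str) -> bool:
    """Check an ORCID iD's format (XXXX-XXXX-XXXX-XXXX) and its
    ISO 7064 mod 11-2 check digit (last character, 0-9 or X)."""
    if not re.fullmatch(r"\d{4}-\d{4}-\d{4}-\d{3}[\dX]", orcid):
        return False
    digits = orcid.replace("-", "")
    total = 0
    for ch in digits[:-1]:
        total = (total + int(ch)) * 2
    check = (12 - total % 11) % 11
    expected = "X" if check == 10 else str(check)
    return digits[-1] == expected

# A well-known example iD (Josiah Carberry's test record):
print(is_valid_orcid("0000-0002-1825-0097"))  # True
print(is_valid_orcid("0000-0002-1825-0090"))  # False (bad check digit)
```

A check like this catches transcription errors before an identifier is treated as evidence of an open science practice.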

Dr Kelly D. Cobey and co-authors now plan to develop a fully automated open science dashboard interface, incorporating the agreed core open science practices where these can be measured accurately and reliably. Dr Cobey and co-authors believe the tool will allow institutions to track their progress adopting open science practices and adhering to mandates, and that it will facilitate open science meta-research, without evoking competition or indicating prestige at the institutional level.

—————————————————–

What do you think – would institutions benefit from an automated dashboard to monitor open science progress?

The reality of incomplete reporting: can journal editors do more to help?
Thu, 06 Jul 2023
https://thepublicationplan.com/2023/07/06/the-reality-of-incomplete-reporting-can-journal-editors-do-more-to-help/

KEY TAKEAWAYS

  • A recent systematic review revealed that incomplete reporting of interventional studies remains a widespread issue.
  • The authors encourage journals to require a reporting checklist as part of the submission process.

An extensive repository of openly accessible reporting guidelines to aid complete reporting of interventional studies is available at the click of a button. However, a recent commentary published in Trials highlights that incomplete reporting remains a substantial problem.

The commentary describes a systematic synthesis of 51 randomised controlled trials reporting on 53 school-based physical activity interventions published between 2015 and 2020. Despite the growth in the availability and promotion of reporting guidelines since previous reviews were conducted, only one training programme (ie, 2% of those analysed) provided complete information covering all intervention components. Even simple information, such as the intervention location, was absent from most reports.

Of the 33 journals that published articles included in the review, only one required reporting checklists for all aspects of the intervention to be submitted.

Strikingly, of the 33 journals that published articles included in the review, only one required reporting checklists for all aspects of the intervention to be submitted. Ryan et al contacted the editors of the other 32 journals suggesting that they update their submission guidelines to include mandatory submission of a reporting checklist. Twenty-seven journals responded, 26% of which welcomed the advice and amended their submission guidelines accordingly.

The authors stress that current systems, including journal submission policies, are allowing wasteful practices to continue. They ask, “how much more waste will be tolerated before action is taken?”

But why is incomplete reporting a problem anyway?

  • Incomplete reporting impedes the readers’ interpretation of study results and prevents effective replication.
  • This ultimately leads to poor outcomes for both study funders and participants.

The authors recommend a resource from the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network to help spark discussion about complete reporting among researchers, editorial teams, funders, and reviewers. They also signal a clear call to action: all journals that publish interventional research should review their submission requirements to mandate completion of reporting checklists.

—————————————————–

Do you submit a reporting checklist during the submission process?

How does failure to falsify influence the reliability of scientific research?
Fri, 23 Jun 2023
https://thepublicationplan.com/2023/06/23/how-does-failure-to-falsify-influence-the-reliability-of-scientific-research/

KEY TAKEAWAYS

  • Failure to test and refute prominent hypotheses reduces confidence in the reliability of scientific results and hinders scientific progress.

Across many scientific fields there is a well-documented reproducibility crisis that is damaging trust in the reliability of research data. In a recent article published in eLife, Dr Sarah Rajtmajer and co-authors discuss how failure to falsify (refute) strong hypotheses through direct testing has contributed to the problem.

As a case study, the authors highlight two prominent and seemingly contradictory hypotheses in the field of connectomics:

  • Hyperconnectivity hypothesis: brain injury results in an enhanced functional network response.
  • Disconnection hypothesis: brain injury results in reduced functional connectivity.

Instead of deliberate attempts to challenge either of these positions, the research area has seen the publication of a large number of small studies examining under-specified hypotheses, which has done little to bring clarity to the existing body of literature. The authors argue that the ‘science-by-volume’ culture, coupled with the overuse of inappropriate statistical tests and lack of falsification attempts, fosters a research environment in which the quantity of scientific findings continues to grow, but the depth of understanding remains stagnant.

The article calls out the big data revolution as a factor adding to these concerns. The ability to analyse large datasets in different ways can produce false or coincidental correlations, particularly if the statistical methodologies used are not robust.

The strongest hypotheses are specific, easily testable, and clearly indicate the evidence needed to disprove their predictions.

According to Rajtmajer et al., the strongest hypotheses are specific, easily testable, and clearly indicate the evidence needed to disprove their predictions. The authors suggest embracing a ‘team science’ approach, where groups of scientists work together to form opposing hypotheses, design experiments to test them, and agree on the outcomes that would support or refute them.

Implementing a falsification approach, whereby every observation confirms or refutes a hypothesis, would be challenging in everyday research practice. However, the authors believe that regular attempts to falsify a hypothesis could guide the direction of scientific research and enhance the reliability of published science, particularly if combined with other processes aimed at improving data transparency.

Regular attempts to falsify a hypothesis could guide the direction of scientific research and enhance the reliability of published science.

—————————————————–

Could placing a greater emphasis on hypothesis testing and falsification help solve the reproducibility crisis in scientific research?

Is it time to redesign peer review?
Thu, 27 Apr 2023
https://thepublicationplan.com/2023/04/27/is-it-time-to-redesign-peer-review/

KEY TAKEAWAYS

  • Breaking peer review into stages could decrease the burden on expert reviewers and improve the quality of published research.

Peer review is a key part of scholarly publishing; however, there have been increasing calls to shift away from the traditional peer review model to make the process more efficient and sustainable. In a Nature World View article, Professor Olavo B. Amaral describes an alternative approach to peer review that could improve data quality and transparency, and lessen the burden on peer reviewers.

Conventional peer review relies on expert referees to evaluate an article’s claims and its suitability for publication in the target journal. Due to time constraints, the underlying data are rarely scrutinised, potentially allowing errors and fraudulent results to go undetected.

Prof. Amaral believes that every manuscript should undergo basic checks to ensure that the data are complete and consistent, calculations are correct, and analyses are reproducible, but that only select articles, such as those of special interest, should be sent out for expert review. Such an approach would allow peer reviewers to use their time more effectively, on papers for which the data have been validated.

“Not all research needs to be reviewed by an expert. Much of the low hanging fruit of quality control doesn’t need a specialist — or even a human.”

Although certain aspects of manuscript quality control could be automated, algorithms work best on structured text, and most scientific fields do not have standardised formats for presenting results. A more fundamental problem is that data checks cannot verify that the data were collected as reported and have not been ‘cherry-picked’. To address this issue systematically, Prof. Amaral suggests that the focus should switch from scrutinising manuscripts to quality control of research practices, as proposed by frameworks such as Enhancing Quality in Preclinical Data (EQIPD). Implementing this change could not only make peer review more viable but could also improve data reproducibility and increase trust in published research.
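The simplest of the basic checks Prof. Amaral proposes – that calculations are correct – can indeed be automated. A minimal sketch of one such check (the function and tolerance are ours, not from any real screening tool): a script verifies that a percentage reported in a manuscript is reproducible from the counts reported alongside it.

```python
def percentage_consistent(numerator: int, denominator: int,
                          reported_pct: float, tolerance: float = 0.5) -> bool:
    """Flag a reported percentage that cannot be reproduced from its counts."""
    actual = 100.0 * numerator / denominator
    return abs(actual - reported_pct) <= tolerance

# "27 of 51 trials (52.9%)" — reproducible from the counts.
print(percentage_consistent(27, 51, 52.9))   # True
# "27 of 51 trials (62.9%)" — flagged for editorial follow-up.
print(percentage_consistent(27, 51, 62.9))   # False
```

The harder part, as the article notes, is extracting the numbers reliably from unstructured manuscript text in the first place – hence the call for standardised reporting formats.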

Prof. Amaral calls on field experts to develop guidelines for data standardisation and urges funding agencies to facilitate the efforts to improve data collection and reporting by, for example, rewarding researchers for having specific aspects of their results certified.

—————————————————–

In your opinion, would breaking peer review into stages and employing algorithms for basic quality checks improve the sustainability of the current peer review system?

Is enough being done to account for the role of sex in medical research?
Tue, 28 Mar 2023
https://thepublicationplan.com/2023/03/28/is-enough-being-done-to-account-for-the-role-of-sex-in-medical-research/

KEY TAKEAWAYS

  • Reporting guidelines recommend that researchers factor the role of sex into animal and clinical studies, but progress in adherence to these guidelines has been slow.
  • Sex-based analyses have led to some key medical discoveries, and researchers are encouraged to examine data for sex differences to enhance study reproducibility and open up questions for scientific pursuit.

Medical research funders and publishers are increasingly calling for the role of sex to be considered in preclinical and clinical studies. In a recent Nature News Feature article, Dr Emily Willingham highlights the importance of reporting sex differences in medical research and examines why progress in this area has been slow.

Sex as a variable has important health implications. A recent example is COVID-19, which has higher mortality in men but affects more women in the form of long COVID. Accounting for sex can enhance the scientific rigour and reproducibility of a study, and even if there are no sex-based differences to report, negative findings are still informative.

Accounting for sex can enhance the scientific rigour and reproducibility of a study, and even if there are no sex-based differences to report, negative findings are still informative.

Yet, since the thalidomide tragedy in the late 1950s, women of childbearing age have been under-represented in clinical trials. Progress was made in the early 1990s, when the US National Institutes of Health (NIH) began requiring that women are included in clinical research. Both the NIH and EU now call for both sexes to be included in cell and animal studies.

In 2016, Dr Shirin Heidari led the publication of the Sex and Gender Equity in Research (SAGER) reporting guidelines, with the aim of encouraging authors to consider sex and gender differences in scientific publications. However, progress in adherence to these guidelines has been slow. An analysis of 720 papers published in 34 biology journals in 2009 and 2019 found that although the proportion of sex-inclusive studies had risen, the proportion incorporating sex-based analyses had decreased from 50% to 42%. Another study reported that even when sex is considered as a variable, treatment effects are often not compared properly between sexes, leading to misinterpretation of data.

The reasons for the relatively slow uptake of sex inclusion and reporting policies include:

  • general resistance to change – some journals assert that the SAGER guidelines are not applicable to their fields
  • cost – mouse studies that include two sexes require more animals, which adds expense
  • complexity of sex – some researchers argue that a binary definition based on specific anatomy or chromosome numbers is too limiting.

Encouragingly though, since sex inclusion guidelines were put in place, important medical discoveries have been made. One key finding is that risk of cardiovascular disease begins to rise at a lower blood pressure in women than in men – a revelation that came about from a call for studies looking specifically at sex differences in health outcomes. Considering the potential implications for medicine, we hope to see more researchers incorporate sex-specific analyses in their studies.

—————————————————–

Do you follow sex and gender reporting guidelines when writing your research manuscripts?
