Ethics – The Publication Plan
A central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing.
https://thepublicationplan.com

When politics meets publishing: researchers fight back
https://thepublicationplan.com/2025/12/17/when-politics-meets-publishing-researchers-fight-back/ (17 Dec 2025)

KEY TAKEAWAYS

  • US government executive orders targeting EDI programmes are prompting federally funded journals to censor demographic data and equity-focused language.
  • Authors and editors are pushing back to ensure data are made available and to maintain the integrity of the scientific record.

Following US government executive orders to end federal equity, diversity, and inclusion (EDI) programmes and to only recognise two sexes, The BMJ has emphasised the importance of retaining sex and gender data in published research. In an article in Undark, Peter Andrey Smith highlights another example of the scientific community pushing back against federal pressure to remove EDI-related data.

Authors make a stand

Smith describes the case of anthropologist Tamar Antin and co-authors, who faced an unusual request from the federally funded journal Public Health Reports following acceptance of their paper on tobacco use. The editors requested removal of the word “equitably” and demographic data, citing compliance with executive orders. Rather than grant the request, Antin and co-authors withdrew their paper entirely and went public. This “act of defiance” was met with widespread support from the scientific community, who argued that removing demographic data doesn’t just affect one paper’s conclusions – it hampers future studies by denying other scientists the opportunity to reanalyse findings or build on existing research.

“Removing demographic data doesn’t just affect one paper’s conclusions – it hampers future studies by denying other scientists the opportunity to reanalyse findings or build on existing research.”

The bigger picture

Smith also shares examples of federally funded researchers who, citing the political landscape, have requested changes to accepted papers, including:

  • withdrawal
  • removal of authors from bylines
  • specific wording changes.

While these requests directly affect only a minority of submissions, maintaining the integrity of the scientific record is paramount.

Looking ahead, the Committee on Publication Ethics’ position statement emphasises that publishing decisions and language choices should not be influenced by politics or government policies, and that retraction must not be used to censor the scientific record.

————————————————

Have the US executive orders around EDI directly impacted your work?

The BMJ pushes back on “anti-gender ideology”
https://thepublicationplan.com/2025/04/30/the-bmj-pushes-back-on-anti-gender-ideology/ (30 Apr 2025)

KEY TAKEAWAYS

  • A recent instruction from the Trump administration ordered CDC scientists to withdraw articles from scientific journals that include “forbidden terms” related to gender.
  • BMJ editors urge other journals to maintain the integrity of scientific research by resisting “bow[ing] to political or ideological censorship”.

A recent instruction from the Trump administration directed US Centers for Disease Control and Prevention (CDC) scientists to withdraw or retract any submitted (but not yet published) articles that include “forbidden terms” such as gender, transgender, LGBT, or transsexual. In an opinion article published in The BMJ, Jocalyn Clark (International Editor) and Kamran Abbasi (Editor-in-Chief) warn of the dangers of blocking important medical information from publication.

Censoring sex and gender in published research

Clark and Abbasi explain that sex and gender data are critical for understanding differences among populations and individuals from outcome and experience perspectives. The authors emphasise that blocking gender-related data is not only harmful for patients, but compromises the integrity of scientific research as a whole. They believe that attempting to censor these data is a political manoeuvre based on “anti-gender ideology” and “a return to fundamentalist values”, in line with the recent disappearance of other politically charged content on topics like immunisation and contraception from CDC websites and datasets.

“Blocking gender-related data is not only harmful for patients, but compromises the integrity of scientific research as a whole.”

Violation of publication ethics

Clark and Abbasi highlight several ways in which the instruction breaches publication ethics:

  • Being at odds with the reporting standards adhered to by medical journals, such as the Sex and Gender Equity in Research (SAGER) guidelines.
  • Conflicting with authorship criteria, which ensure not only that authors are credited for their work, but that they are accountable for it. Removing an author who qualifies for authorship, even at their own request, constitutes ghostwriting.
  • “Muzzling” important medical data. Although authors are within their rights to withdraw submitted papers from a given journal prior to publication, the data should still be published.

The authors call upon journal editors to resist the instruction on the grounds that they have a “duty to stand for integrity and equity”, which supersedes any “political or ideological censorship”.

————————————————–

Do you agree that authors and editors complying with the instruction would compromise the integrity of scientific research?

ChatGPT and peer review: risk or revolution?
https://thepublicationplan.com/2025/04/01/chatgpt-and-peer-review-risk-or-revolution/ (1 Apr 2025)

KEY TAKEAWAYS

  • AI-generated peer reviews are increasingly common, but they often lack depth and true scientific insight.
  • Responsible AI use can support, but not replace, expert human review; clear guidelines and transparency are needed to maintain scientific integrity.

A recent article by James Zou in Nature highlights the growing role of AI in peer review, where up to 17% of peer-review comments in a sample of computer-science reviews were AI-generated. While tools like ChatGPT can assist with reviewing research papers, they also present challenges that the academic community must address.

The growing use of AI in peer review

Since the rise of ChatGPT in 2022, researchers have observed an increase in AI-generated peer reviews. These reviews are often characterised by a formal, verbose style and frequently fail to refer specifically to the content of the submitted paper. Zou’s study, which analysed 50,000 peer reviews, also found that AI-generated text was more common in last-minute reviews, suggesting that time constraints may drive its use.

Risks and limitations of AI in peer review

While AI can streamline certain peer-review tasks, it cannot replace expert human reviewers. Current large language models (LLMs) struggle with deep scientific reasoning and can often generate misguided assessments or ‘hallucinations’. AI-generated feedback can also lack technical knowledge and may overlook critical methodological flaws. Even when AI tools are used for low-risk applications, such as retrieving or summarising information, they can be unreliable, and all AI outputs should be verified by human reviewers. Platforms like OpenReview, which facilitate interactive discussions between authors and reviewers, offer a promising model for balancing AI assistance with human oversight.

Responsible AI use in peer review

Zou concludes that the adoption of AI in academic publishing is inevitable. Instead of banning AI, the scientific community must establish guidelines for its responsible use.

Instead of banning AI, the scientific community must establish guidelines for its responsible use.

To maintain scientific integrity, journals and conferences should require reviewers to disclose AI usage and develop policies that limit AI’s role to supportive, rather than decision-making, functions. More research is needed to define best practices, ensuring that AI benefits peer review without compromising its core principles.

————————————————–

How should journals handle AI-generated reviews?

Protecting publications: the fight against misconduct
https://thepublicationplan.com/2025/03/18/protecting-publications-the-fight-against-misconduct/ (18 Mar 2025)

KEY TAKEAWAYS

  • Research integrity concerns are growing, with over 10,000 article retractions recorded in 2023.
  • Publishers are investing in tools, training, and investigations to combat misconduct, but collaboration across stakeholders is vital to uphold ethical research practices.

The rise in research integrity concerns is shaping the role of journal publishers, as detailed in a recent Insights article by Sabina Alam, Director of Publishing Ethics and Integrity at Taylor & Francis. With over 10,000 article retractions recorded in 2023, the issue of academic misconduct is growing, prompting publishers to implement stronger safeguards and investigative processes. However, ensuring research integrity is a shared responsibility, requiring active involvement from institutions, funders, and researchers alike.

The evolving challenge of research integrity

The prevalence of integrity issues – ranging from unintentional errors to deliberate fraud – has driven the increase in retractions. Among the many challenges publishers face are paper mills, citation manipulation, and AI-generated fraudulent content. However, these represent just a fraction of the evolving unethical practices that threaten academic integrity. As the publishing landscape changes, so too do the methods of bad actors, who continuously adapt to bypass safeguards, making it essential for publishers to remain vigilant and responsive to new threats.

Shared responsibility in addressing unethical practices

While publishers are making significant investments in internal processes, training, and investigative teams to detect and address misconduct, there is also a critical need for greater awareness among consumers of scholarly content. Understanding the different types of post-publication notices, such as corrections, retractions, and expressions of concern, is essential for interpreting research validity and credibility.

Understanding the different types of post-publication notices, such as corrections, retractions, and expressions of concern, is essential for interpreting research validity and credibility.

Educating researchers, institutions, and the wider academic community about these notices will help ensure that retracted or questionable research is not inadvertently cited or relied upon in future work. Alam acknowledges initiatives such as United2Act and STM Integrity Hub that are aiming to create industry-wide solutions to prevent fraudulent research from being published in the first place.

As scholarly publishing evolves, the focus on ethics and transparency continues to grow. By strengthening detection mechanisms, enforcing ethical guidelines, and fostering a shared responsibility for ethical publication practices, we can collectively safeguard the credibility of academic research.


————————————————–

What do you think is the most effective way to address research misconduct?

Redefining research ethics for a fairer future
https://thepublicationplan.com/2025/02/19/redefining-research-ethics-for-a-fairer-future/ (19 Feb 2025)

KEY TAKEAWAYS

  • In late 2024, the Declaration of Helsinki underwent its most radical update in 60 years, including a revision to protect healthy volunteers.
  • However, critics suggest there is still a way to go and that other aspects of research ethics need to be incorporated, such as how to ensure the benefits of clinical research are felt by trial participants and their communities. 

The World Medical Association (WMA) recently updated a key ethical framework, the Declaration of Helsinki, at a scale not seen since the Declaration’s inception in 1964. As reported by Cathleen O’Grady in Science, the WMA hope that the changes will help to drive new standards in research equity.

“Humans”, not “subjects”, and the importance of healthy volunteers

As outlined by O’Grady, the 2024 revisions, which mark the tenth time the document has been updated, struck a new tone, with the Declaration’s title now referring to “human participants” rather than “human subjects”. The revisions, published in JAMA with an accompanying editorial, also include the first ever mention of healthy volunteers, rather than considering only patient participants in research.

The 2024 revisions, which mark the tenth time the document has been updated, struck a new tone, with the Declaration’s title now referring to “human participants” rather than “human subjects”.

Expanded scope

These important steps forward are not the only signs of the Declaration’s expanded scope and ambition. Other changes include:

  • a direction that all those involved in medical research should adopt the Declaration’s principles, not just doctors.
  • a focus on ensuring vulnerable groups are included in medical research. Previous guidance aimed at protecting groups such as pregnant people inadvertently led to their exclusion from clinical trials. The revised Declaration notes that this can exacerbate disparities and that the harms of exclusion and inclusion should both be considered.

Radical, but complete?

The WMA General Assembly unanimously supported the 2024 update, which Dr Ashok Philip, President of the WMA, described as a “landmark revision”. However, as reported by O’Grady, some feel the revisions should have gone even further and that there are still key omissions, namely:

  • Benefits for participants and the wider community: the update does not look at ways to ensure that trial participants and their communities benefit from research.
  • Other types of research: the Declaration’s focus remains medical research, with epidemiological and behavioural studies not yet covered.
  • Data protection: the use of data from insurance or pharmaceutical company databases in research, and related issues of informed consent, are not discussed.

Nevertheless, the Declaration of Helsinki remains a cornerstone of ethical conduct in medical research, and the latest revisions provide an important focus on the dignity of research participants. Chair of the revision workgroup, Dr Jack Resneck Jr, calls on all involved in medical research to uphold these renewed principles.

————————————————–

What do you think is the most important topic to be included in future updates to the Declaration of Helsinki?

Why are retraction rates rising?
https://thepublicationplan.com/2024/07/24/why-are-retraction-rates-rising/ (24 Jul 2024)

KEY TAKEAWAYS

  • The retraction rate for biomedical science papers with corresponding authors based at European institutions quadrupled between 2000 and 2020.
  • Unreliable data have emerged as a leading reason for retraction, while duplication remains a key factor.

Research misconduct remains a major concern, with increasing efforts dedicated to monitoring retraction rates – and the underlying reasons. An analysis recently published in Scientometrics and discussed in Nature news uncovered a quadrupling of retraction rates since 2000 among biomedical science articles with corresponding authors based at European institutions, from about 11 per 100,000 articles to almost 45 per 100,000 in 2020.

Why are articles retracted?

Fabián Freijedo-Farinas and colleagues reviewed over 2,000 retracted English-, Spanish-, and Portuguese-language articles collated by Retraction Watch to identify underlying reasons. Research misconduct was the most prevalent factor, accounting for 67% of cases, while 16% of retractions were due to honest errors (with no reason provided for the remainder).

Reasons have shifted over time, with authorship and affiliation issues falling from one of the top reasons to joint 5th of 7. Duplication has remained steady as a cause, while retractions due to unreliable data – including bias and lack of original data availability – have skyrocketed. The authors suggest paper mills have a major role to play.

However, it’s not the same story across Europe: of the 4 countries with the most retractions, the proportion of duplication-related retractions has fallen in the UK but substantially increased in Italy and Spain.

Why are retraction rates increasing?

Arturo Casadevall, who identified similar rates of research misconduct-related retractions in a 2012 analysis, commented that the overall hike in retraction rates could be due to authors, institutions, and journals increasingly viewing retraction as the best route to correct the scientific record.

The overall hike in retraction rates could be due to authors, institutions, and journals increasingly viewing retraction as the best route to correct the scientific record.

In addition, publications have increasingly drawn the attention of online sleuths, who may raise concerns with journals, according to research integrity specialist Sholto David. New digital technologies are also making it easier to screen publications for suspicious text or data. Retraction Watch co-founder Ivan Oransky believes use of plagiarism-detection software could be partially responsible for the increase; looking to the future, tools like image manipulation detectors could mean retraction rates rise further.


————————————————–

How much do you think increasing use of image manipulation detectors will impact retraction rates?

Go with the flowcharts: new tools to help data publishers navigate ethical concerns
https://thepublicationplan.com/2023/11/09/go-with-the-flowcharts-new-tools-to-help-data-publishers-navigate-ethical-concerns/ (9 Nov 2023)

KEY TAKEAWAYS

  • The FORCE11-COPE Research Data Publishing Ethics Working Group has released 8 new practical flowcharts.
  • These tools guide data publishers through 4 areas in which ethical concerns can arise: authorship and contributorship, legal and regulatory, rigour, and risk.

Following a wave of high-profile retractions in recent years, ethical issues are an ongoing concern in scientific publishing. In an article for Upstream, Iratxe Puebla, a member of the FORCE11-COPE Research Data Publishing Ethics Working Group, presents a series of practical flowcharts based on the group’s recommendations, which aim to help those dealing with such issues.

A force for change

The FORCE11 Research Data Publishing Ethics Working Group was established in 2021 as a multi-stakeholder effort to support data publishers — including journals and data repositories — in handling ethical concerns raised during the publication process.

Coping with concerns

As Puebla reports, the group collaborated with the Committee on Publication Ethics (COPE) to develop recommendations around 4 key areas of possible ethical concern:

  • authorship and contribution conflicts
  • legal and regulatory restrictions
  • rigour (or completeness of datasets)
  • potential risks associated with data release.

Following this initial work, the group developed tools to help data publishers to implement the guidance and embed it into everyday practice. As reported by The Scholarly Kitchen, the group recognised that the ‘legwork’ of handling ethical issues is performed by teams at journals and data repositories of varying sizes, with correspondingly variable levels of support and resources. To this end, they produced practical, easy-to-use flowcharts to accompany the recommendations and aid these teams.

Workable ethics

The group produced 8 flowcharts based on the 4 areas of concern identified in the recommendations (a pre- and a post-publication flowchart for each area). These are freely available via the COPE website.

Puebla and the Working Group hope that the flowcharts will be useful to those working in data publishing. They call on the data publishing community to use these tools, to submit any feedback, and to continue to contribute to the discourse around ethical data publishing.

————————————————–

Will your organisation use the new FORCE11-COPE flowcharts when handling ethical concerns relating to data publishing?

Author reports of potential conflicts of interest: room for improvement
https://thepublicationplan.com/2023/10/18/author-reports-of-potential-conflicts-of-interest-room-for-improvement/ (18 Oct 2023)

KEY TAKEAWAYS

  • Inaccurate author disclosures continue to be an issue in medical publishing. A recent study shows that most authors fail to report, or under-report, ‘potential conflicts of interest’.
  • The study’s authors call for action from journals to help remove stigma and increase transparency.

The fully transparent disclosure of relationships between authors of scientific research and other stakeholders is paramount to maintaining the credibility of research and upholding public confidence. Nevertheless, inadequacies in reporting practices remain a challenge. A recent study by Dr Mary Guan and colleagues shed more light on current practices through a detailed comparison of author-disclosed ‘potential conflicts of interest’ versus pharma-reported payments to healthcare professionals.

What did the research reveal?

Guan et al. reviewed disclosures from the first, second, and final US authors of 150 clinical manuscripts published in the top 3 US rheumatology journals from January 2019 onwards. The researchers then compared this information with entries in the Open Payments database. The group’s analyses yielded some surprising findings:

  • Disclosures were inaccurate in 92% of papers that involved authors deemed to have ‘potential conflicts of interest’.
  • Of the 135 authors with ‘potential conflicts of interest’, 87% disclosed inaccurately.
  • Where data were available, the total monetary value of undisclosed potential conflicts was nearly $5.2 million. For those that were ‘under-disclosed’, the total value was just above $4.1 million.
  • Among the 14 papers that reported clinical trial data, all included authors who failed to report a potential conflict of interest, and some also under-reported potential conflicts.

So, what can we do to improve reporting accuracy?

In recent years, the International Committee of Medical Journal Editors moved to using the term ‘disclosure of relationships’ rather than ‘conflicts of interest’. This was in part to ensure that guidance was simple for authors to follow in a consistent way: all relationships should be disclosed, and readers draw their own conclusions as to which may constitute a potential conflict of interest. Guan et al. point out that perceived stigma surrounding the term ‘potential conflict of interest’ could also deter authors from accurate reporting, and that a more neutral term may encourage better compliance. Furthermore, they propose that “journals must clearly articulate their reporting expectations and also must clearly emphasise that industry payments do not, a priori, impair the validity of a manuscript”.

“Journals must clearly articulate their reporting expectations and also must clearly emphasise that industry payments do not, a priori, impair the validity of a manuscript”.

————————————————–

Which strategy would be most effective at improving the accuracy of author disclosures?

No artificial ingredients: Nature takes a stand against generative AI
https://thepublicationplan.com/2023/08/03/no-artificial-ingredients-nature-takes-a-stand-against-generative-ai/ (3 Aug 2023)

KEY TAKEAWAYS

  • Nature has banned the inclusion of AI-generated visual content in publications due to concerns about transparency, attribution, privacy, and copyright.
  • The decision aims to protect research integrity, preserve content creators’ rights, and strengthen the fight against false information.

Nature has decided to prohibit the use of artificial intelligence (AI) in the creation of visual content, including photographs, illustrations, and videos. The decision comes amidst ever-increasing use of generative AI tools, such as ChatGPT and Midjourney, and the discussions about their possible benefits and limitations.

As outlined in a recent Editorial, the decision was driven by 5 primary concerns:

  • Inability to verify: existing AI tools do not disclose their sources, making it impossible to verify the accuracy and authenticity of the generated content.
  • Lack of attribution: generative AI tools do not properly acknowledge the creators of the works they use or reference, disregarding the essential practice of giving credit where it is due.
  • Violation of copyright laws: generative AI systems often rely on copyrighted works without proper authorisation, infringing upon the rights of artists and content creators.
  • Invasion of privacy: generative AI tools can be used to create realistic depictions of individuals (such as ‘deepfakes’) without their explicit consent, leading to ethical issues and potential legal consequences.
  • Spread of misinformation: the generation of ‘deepfakes’ contributes to the dissemination of false information, creating confusion and eroding trust in authentic data.

While the journal will continue to allow the use of generative AI in text creation, subject to proper documentation and attribution, generative AI tools will not be accepted as credited authors on research papers.

Nature’s firm stance against AI-generated visuals underscores the need to weigh the creative benefits of generative AI tools against the potential risks to research integrity and content creators.

Nature’s firm stance against AI-generated visuals underscores the need to weigh the creative benefits of generative AI tools against the potential risks to research integrity and content creators. As the scientific community embraces AI, it will be crucial to establish ethical frameworks and transparent practices to ensure responsible use of this technology while upholding the principles of authenticity and creativity.

————————————————–

What do you think – should generative AI be allowed in the creation of visual content for research publications?

Society-first science: 10 rules for responsible research
https://thepublicationplan.com/2023/08/01/society-first-science-10-rules-for-responsible-research/ (1 Aug 2023)

KEY TAKEAWAYS

  • Researchers have outlined 10 actionable rules to help scientists prevent inadvertent harm to society caused by their studies.
  • The guidance includes seeking diverse perspectives, using sensitive language, reporting data and study limitations transparently, ensuring accurate interpretation of findings, and addressing criticism respectfully.

Scientific research holds great potential to bring about positive changes in society. However, it can also inadvertently cause harm to individuals or social groups by reinforcing stereotypes, biases, or negative perceptions. Researchers often lack the necessary training and tools to consider and minimise such negative impacts of their studies. To address this issue, Dr Niv Reggev and a team of international colleagues have developed 10 rules for socially responsible science, recently published in PLOS Computational Biology:

  1. Get diverse perspectives early on: when appropriate, gather input from members of marginalised groups to gain crucial insights into their experiences.
  2. Understand the limits of your design with regard to your claims: anticipate and acknowledge the potential limitations of your study in advance to avoid flawed conclusions.
  3. Incorporate underlying social theory and historical events: recognise the importance of social context to ensure correct interpretations.
  4. Be transparent about your hypothesis and analyses: pre-register study protocols and analyses to inspire confidence in the findings and minimise the risk of drawing incorrect conclusions.
  5. Report your results and limitations accurately and transparently: avoid oversimplification and share data and analyses openly to enable peer review and replication.
  6. Choose your terminology carefully: use sensitive and inclusive language to avoid reinforcing stereotypes, and seek feedback from the communities being researched.
  7. Seek a rigorous review and editorial process: collaborate with rigorous review and editorial processes to ensure accuracy and enhance public confidence in the results.
  8. Play an active role in ensuring correct interpretations of your results: collaborate with university or journal press offices to avoid sensationalised or inaccurate press releases.
  9. Address criticism from peers and the general public with respect: thoughtfully review and respond to criticism, fostering constructive dialogue.
  10. When all else fails, consider submitting a correction or a self-retraction: rectify any flaws identified post-publication to uphold the integrity of the scientific record.

The authors emphasise that these rules are not meant to be prescriptive but rather serve as guidance to assist scientists in considering the social impact of their studies. By following them, scientists can be empowered to conduct research that not only generates valuable knowledge but also upholds ethical standards, fosters inclusivity and transparency, and avoids detrimental societal consequences.

—————————————————–

Should researchers undergo specific training on considering and minimising potential societal harm caused by their studies?
