Recommendations – The Publication Plan
A central online news resource for everyone interested in medical writing, the development of medical publications, and publication planning.
https://thepublicationplan.com

Wiley develops AI guidelines in response to demand from researchers
https://thepublicationplan.com/2025/10/01/wiley-develops-ai-guidelines-in-response-to-demand-from-researchers/
Wed, 01 Oct 2025

KEY TAKEAWAYS

  • Wiley has adopted a forward-looking AI policy, with guidelines on responsible and ethical use, under human oversight, to safeguard the integrity of publications.
  • The guidelines also offer tips on potential use cases for AI, effective prompt engineering, and choosing the best AI tool for a project.

Artificial intelligence (AI) is becoming more widely adopted within scientific publishing, yet many authors remain unsure how to use it effectively while maintaining the integrity of their research. As highlighted in an article in Research Information, Wiley have released AI guidelines for book authors in response to findings that ~70% of researchers want publisher guidance on using AI.

The guidelines include:

  • Reviewing terms and conditions: authors should regularly review terms and conditions to ensure that their chosen AI technology does not claim ownership over the content or limit its use.
  • Maintaining human oversight: AI should assist but not replace authors. Authors must take full responsibility for their work and review any AI-generated content before submission.
  • Disclosing AI use: authors should document all AI use, including its purpose and impact on findings, and describe how AI-generated content was verified.
  • Ensuring protection of rights: authors must ensure that the AI used (or its provider) does not gain rights over the authors’ material, including for the purposes of training the AI.
  • Using AI responsibly and ethically: authors must comply with data protection laws, avoid using AI to copy the style or voice of others, fact-check the accuracy of AI-generated content, and be mindful of potential biases.

The guidance also provides recommendations on how to write prompts and select AI tools, as well as suggestions on use cases for authors newer to AI:

  • analysing research and recognising themes across sources
  • exploring ways to simplify complicated topics
  • adapting work so it is relatable for different audiences
  • polishing work by refining language and checking for consistency.

The guidelines complement Wiley’s existing generative AI framework for journal publications. As stated by Jay Flynn (Wiley EVP & General Manager, Research & Learning), “writers and researchers are already using AI tools, whether publishers like it or not. At Wiley, we’d rather embrace this shift than fight it”.

“Writers and researchers are already using AI tools, whether publishers like it or not. At Wiley, we’d rather embrace this shift than fight it”
– Jay Flynn, Wiley EVP & General Manager, Research & Learning

—————————————————

What do you think – should publishers give authors more guidance on how to use AI appropriately?

Why aren’t more journals publishing plain language summaries?
https://thepublicationplan.com/2025/05/08/why-arent-more-journals-publishing-plain-language-summaries/
Thu, 08 May 2025

KEY TAKEAWAYS

  • Most journals surveyed do not allow authors to submit plain language summaries (PLS), often citing a perceived lack of demand from readers or authors.
  • Existing PLS practices are inconsistent in format, peer review processes, and indexing methods.

Plain language summaries (PLS) have the power to unlock science for everyone – so why are they still missing from many medical journals? A recent article by Slávka Baróniková and colleagues, published in Medical Writing, the journal of the European Medical Writers Association (EMWA), presents the results of a survey conducted by Open Pharma in 2022–2023. The survey explored how journal editors and publishers view the role of PLS in scientific publishing and whether current practices align with Open Pharma’s recommendations for clear and accessible research communication.

73% of journals surveyed did not allow author-submitted PLS, citing reasons such as a perceived lack of reader or author demand, lack of relevance to journal content, and insufficient resources.

The 16-question survey gathered responses from 29 individuals representing 26 journals and 7 publisher portfolios. The main findings were:

  • Most journals do not support PLS submission: 73% of journals surveyed did not allow author-submitted PLS.
  • PLS practices are inconsistent: Among journals that did accept PLS, formats, placement, peer review, and indexing practices varied widely.
  • Peer review and discoverability are limited: Fewer than half of the journals that published PLS peer reviewed them or used appropriate PubMed metatags. Only one journal reported consistent use of the PLS metatag, which is crucial for indexing.
  • Perceived barriers include lack of demand: Common reasons for not accepting PLS included a perceived lack of reader or author demand, lack of relevance to journal content, and insufficient resources.
  • Most journals recognise the potential for PLS to increase readership: Patients, healthcare professionals, and students were seen as key audiences for PLS.
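The ‘PLS metatag’ in the findings above refers to machine-readable tagging that lets indexers such as PubMed recognise a plain language summary alongside the main abstract. As a rough sketch only, the snippet below builds a JATS-style XML fragment in which the PLS is distinguished by an abstract-type attribute; the element names and the exact attribute value follow Open Pharma’s recommendation as we understand it, but should be checked against a journal’s own deposit requirements.

```python
# Sketch: tagging a plain language summary in JATS-style XML so indexers can
# distinguish it from the main scientific abstract. The attribute value
# "plain-language-summary" is an assumption based on Open Pharma's
# recommendation, not a verified PubMed specification.
import xml.etree.ElementTree as ET

article_meta = ET.Element("article-meta")

# Standard scientific abstract
abstract = ET.SubElement(article_meta, "abstract")
ET.SubElement(abstract, "p").text = "Technical summary of the study..."

# Plain language summary, distinguished by the abstract-type attribute
pls = ET.SubElement(
    article_meta, "abstract", {"abstract-type": "plain-language-summary"}
)
ET.SubElement(pls, "p").text = "In everyday language, we tested whether..."

print(ET.tostring(article_meta, encoding="unicode"))
```

Without tagging along these lines, a PLS remains invisible to indexing systems even when it appears in the published article.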

Despite progress by some publishers, the survey highlights an ongoing need for greater standardisation, more consistent peer review, and improved visibility of PLS. It also revealed that some respondents were unsure of their own journal’s PLS policies, underscoring the need for better internal communication and training.

The authors urge journals to adopt Open Pharma’s recommendations and strengthen their PLS policies to ensure that PLS are accessible, discoverable, and scientifically accurate.

————————————————–

What do you think – should plain language summaries be peer reviewed?

Are conflicts of interest reported transparently in healthcare guidelines?
https://thepublicationplan.com/2024/11/07/are-conflicts-of-interest-reported-transparently-in-healthcare-guidelines/
Thu, 07 Nov 2024

KEY TAKEAWAYS

  • RIGHT-COI&F guides transparent reporting of COIs and funding in healthcare guidelines and policy documents of guideline organisations.
  • The checklist can also be used to assess the quality and completeness of reporting in published guidelines.

Healthcare guidelines substantially influence clinical practice and policy and are developed through extensive analysis and decision-making. Amid broader issues with accurate disclosure in medical publishing, a recent Annals of Internal Medicine article by Yangqin Xun and colleagues highlighted that while guidelines are especially sensitive to conflicts of interest (COIs) and funder influence, disclosure is generally poor.

Clear and complete reporting of COIs and funding is crucial for credibility and is monitored as a key open science indicator. Yet existing checklists, such as Reporting Items for practice Guidelines in HealThcare (RIGHT), often lack detail on how to report COIs and funding. Xun et al. aimed to address this, building on RIGHT to develop a COI- and funding-specific extension. RIGHT-COI&F can be used both while developing healthcare guidelines and to assess completeness of COI and funding reporting.

RIGHT-COI&F can be used both while developing healthcare guidelines and to assess completeness of COI and funding reporting.

Checklist development

RIGHT-COI&F development followed the recommendations of the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network, based on a published protocol. Key steps were:

  • establishing working groups, including an expert panel
  • generating an initial checklist based on existing materials and a stakeholder survey
  • agreeing checklist items through surveying experts and consensus meetings
  • refining and testing the checklist.

RIGHT-COI&F items: policy and implementation

RIGHT-COI&F has 27 items, 18 focused on COIs and 9 on funding. Most items are related to policy and include:

  • defining the types of interest to be disclosed (eg, based on relevance, financial amount, or time period) and by whom
  • how the accuracy and completeness of disclosures are verified
  • processes for determining whether interests are conflicts
  • strategies to manage COIs
  • whether accepting funding from certain sources is restricted.

Organisational policies may fulfil these items, obviating the need for detailed descriptions in individual guidelines.

The remaining items relate to implementation in individual projects, such as ensuring that declared interests are reported in detail, alongside the funding received (and the role of funders).

Next steps

To promote adoption, the authors plan to translate RIGHT-COI&F into multiple languages, disseminate it through academic networks, and seek endorsement by medical journals. Further assessment of real-life feasibility and impact is planned. We look forward to seeing how RIGHT-COI&F helps uphold transparency and trust in the healthcare space.

————————————————–

What do you think – will the RIGHT-COI&F checklist improve the transparency and credibility of guidelines?

Publisher policies on AI use: is it time for change?
https://thepublicationplan.com/2024/10/10/publisher-policies-on-ai-use-is-it-time-for-change/
Thu, 10 Oct 2024

KEY TAKEAWAYS

  • The increasing use of AI tools in academic publishing calls for policies that keep pace with the myriad ways that authors and researchers use AI.
  • An AI risk register that considers the specific risks inherent in individual tools and the ways they are used, together with collaboration among publishers on standardised guidance, could be the answer.

Protecting the integrity of the scientific record becomes more challenging as the role of AI in academic publishing expands. In a recent article for The Scholarly Kitchen, Avi Staiman expresses his concerns about the lack of adequate publisher policies on AI use and sets out what publishers could do to step up their game.

Where do current policies come up short?

Staiman reports that while authors are eager to use AI, most lack the expertise to navigate its full potential while protecting research integrity. For instance, Oxford University Press (OUP) reported that 76% of researchers use AI in their research, while 72% are unaware of their institution’s policies on AI.

76% of researchers use AI in their research, but 72% are also unaware of their institution’s policies on AI.

Alongside this, publishers’ struggles to keep up to date with the latest developments in AI hamper the development of suitable guidelines. Limitations of current policies include:

  • lack of clarity on the roles of authors versus AI in individual cases (for example, who created the content vs who refined it)
  • failure to consider the wide range of available AI tools and their differing uses (substantive vs non-substantive AI use)
  • oversimplified AI policies that amount to blanket disclosure statements that AI was used, rather than specifying what was used and how.

Staiman argues that, given the diversity of AI tools that now exist — from those capable of performing statistical analysis, such as JuliusAI, to those assisting with literature searches, like Scite — the ways in which we tackle transparency and regulation need to evolve.

How can publisher AI policies keep pace with AI technology?

To this end, and inspired by the EU AI Act, Staiman suggests formulating an ‘AI risk register’ that assigns AI tools a level of regulation that matches both the potential risk inherent in that tool and the way it is being used in research. He also recommends 8 practical actions for publishers:

  1. Develop standardised guidelines
  2. Update guidelines continuously
  3. Establish transparent and inclusive governance
  4. Boost learning on AI within individual organisations
  5. Assign different risk levels to AI tools
  6. Classify AI tools based on the type of use and the level of verification required
  7. Define clear roles for authors and AI
  8. Consider how to monitor and enforce AI policies

Staiman calls upon publishers to rapidly collaborate so that AI policies can keep pace with the fast-moving changes in AI technology.
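To make the risk-register idea concrete, here is a minimal, purely hypothetical sketch: the tool names (taken from the examples cited above), risk levels, and required actions are illustrative assumptions, not Staiman’s actual proposal or any publisher’s policy.

```python
# Hypothetical sketch of an 'AI risk register': each tool is classified by its
# type of use and assigned a risk level that determines what the publisher
# would require of authors. All entries are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    tool: str
    use_type: str         # e.g. "statistical analysis", "literature search"
    risk_level: str       # "low", "medium", or "high"
    required_action: str  # what the publisher asks of the author

REGISTER = [
    RegisterEntry("JuliusAI", "statistical analysis", "high",
                  "disclose in methods and verify outputs independently"),
    RegisterEntry("Scite", "literature search", "low",
                  "no disclosure required"),
]

def policy_for(tool_name: str) -> str:
    """Look up the action a publisher would require for a given tool."""
    for entry in REGISTER:
        if entry.tool == tool_name:
            return entry.required_action
    # Unlisted tools default to the most cautious handling
    return "unlisted tool: require full disclosure pending review"

print(policy_for("JuliusAI"))
```

The point of such a structure is that disclosure requirements scale with risk, rather than applying one blanket statement to every use of AI.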

————————————————–

What do you think – are current publisher policies on AI use robust enough to ensure research integrity?

Overcoming bias in ‘overviews of reviews’: a spotlight on appraisal tools
https://thepublicationplan.com/2024/07/30/overcoming-bias-in-overviews-of-reviews-a-spotlight-on-appraisal-tools/
Tue, 30 Jul 2024

KEY TAKEAWAYS

  • ‘Overviews of systematic reviews’ are a feature of evidence-based decision making, but are only as strong as the individual reviews they include. Evaluating potential biases and the methodological quality of systematic reviews is therefore crucial.
  • A recent article examines 2 recommended systematic review assessment tools, AMSTAR-2 and ROBIS. While both have value, their use requires proper training, time, and know-how.

Synthesising evidence from multiple systematic reviews (also known as conducting an umbrella review or ‘overview of reviews’) can form a key part of evidence-based decision making and treatment guidelines. However, conducting effective ‘overviews of reviews’ requires careful planning to minimise bias, which can be present at either a primary study or individual review level. In a recent BMJ Medicine methods primer, Carole Lunny and colleagues address the challenges of assessing and reporting bias in systematic reviews. The group offer a detailed examination of AMSTAR-2 and ROBIS, two recommended appraisal tools, and provide practical guidance for authors of ‘overviews of reviews’.

AMSTAR-2 versus ROBIS

The group compared key features of each tool.

AMSTAR-2:

  • 16-item checklist
  • focuses on the methodological quality of systematic reviews of healthcare interventions, including risk of bias
  • reportedly favoured for its quick and easy-to-use format
  • may be preferred for broad assessment of systematic review quality.

ROBIS:

  • domain-based tool
  • 19 items, aimed at identifying biases in systematic reviews
  • useful for pinpointing concerns in review conduct and assessing relevance
  • requires “more thoughtful assessment and time”
  • may be preferred for more nuanced assessments, or comparisons of risk of bias across multiple types of systematic reviews.

Standardising ‘overviews of reviews’

The authors call for a standardised approach to ‘overviews of reviews’ to enhance their credibility and value.

Regardless of the appraisal tool used, the authors call for a standardised approach to ‘overviews of reviews’ to enhance their credibility and value. They outline several key recommendations:

  • Report methodological quality or bias by item, domain, and overall judgement, focusing on outcomes.
  • Discuss risk of bias for each outcome.
  • Highlight any individual review methodological quality issues or potential biases as limitations of the ‘overview of reviews’.
  • Use ROBIS to subgroup reviews by risk of bias, identifying overemphasised findings and excluding high-risk reviews.

An expanding toolkit

Previously, the launch of PRISMA-S provided much-needed guidance on reporting literature searches within systematic reviews, and Cochrane’s Hilda Bastian proposed solutions to ensure that systematic review protocols were robust. Now, Lunny and colleagues’ primer, and the tools therein, sit alongside initiatives from the LATITUDES Network to form part of a drive to reduce bias in evidence synthesis.

————————————————–

Do you use a specific tool(s) when synthesising evidence from systematic reviews?

ICMJE recommendations update 2024: what’s new and what’s next?
https://thepublicationplan.com/2024/04/02/icmje-recommendations-update-2024-whats-new-and-whats-next/
Tue, 02 Apr 2024

KEY TAKEAWAYS

  • Key updates to the ICMJE Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals include guidance on the use of AI by authors, editors, and reviewers.
  • Other important updates include statements on fair authorship assignment, sustainability goals, funding support declarations, and protection of research participants.

The International Committee of Medical Journal Editors (ICMJE) recently updated its Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Key updates provide guidance on appropriate authorship of research carried out in low- and middle-income countries (LMICs) and the use of artificial intelligence (AI) in generating and reporting data. The latest recommendations and an annotated version of the previous recommendations are both freely available on the committee’s website, and a summary of all updates is provided below.

  • Authorship: local investigators should be included as authors on publications reporting data from LMICs. As well as ensuring fairness, local author contributions provide additional context on the implications of research.
  • Use of AI (authors): if AI is used to provide writing assistance, this should be clearly stated in the article acknowledgements. The use of AI by researchers to help collect data or generate figures should be noted in the methods.
  • Use of AI (editors and reviewers): journal editors should be aware of potential confidentiality concerns if AI is used in the review process. Reviewers must request permission from the journal before using AI assistance.
  • Carbon emissions: all stakeholders in medical publishing should collaborate to work towards net zero carbon emissions.
  • Acknowledging funding support: funding statements should relate directly to the work being reported, for example: “This study was funded by A; Dr. F’s time on the work was supported by B.” Other potential conflicts of interest and general funding support should be included in the disclosures section.
  • Protection of research participants: authors should be prepared to provide approval documentation for their study if requested by editors.
  • Citations: wherever possible, cited references should be published articles rather than abstracts.

In an editorial published in Cureus, Sankalp Yadav takes a detailed look at the evolution of the recommendations and their impact on medical publishing, describing the latest updates as a “beacon of ethical guidance in the ever-evolving domain of biomedical research and publishing”. Yadav also discusses some of the ongoing challenges in implementing the ICMJE guidance, such as the promotion of fair and ethical authorship practices and keeping pace with new developments – something that may be particularly true for AI and its increasing impact across all areas of medical research and publishing.

If AI is used to provide writing assistance, this should be clearly stated in the article acknowledgements.

————————————————

Which aspect of the updated ICMJE recommendations do you believe will have the most positive impact on the quality and integrity of medical publications?

SPIRIT-Outcomes 2022 Extension: guidelines for transparent inclusion of clinical trial outcomes in trial protocols
https://thepublicationplan.com/2023/06/28/spirit-outcomes-2022-extension-guidelines-for-transparent-inclusion-of-clinical-trial-outcomes-in-trial-protocols/
Wed, 28 Jun 2023

KEY TAKEAWAYS

  • The SPIRIT-Outcomes 2022 Extension addresses the need for better guidance in describing outcomes in clinical trial protocols.
  • The new extension should be used in conjunction with the original SPIRIT 2013 Statement to improve transparency and integrity of clinical trial reporting.

The Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT)-Outcomes 2022 Extension aims to provide harmonised, evidence- and consensus-based outcome reporting standards for clinical trial protocols. An addition to the existing suite of extensions supporting the original SPIRIT 2013 Statement, the latest extension was recently published in JAMA by Dr Nancy Butcher and colleagues.

The SPIRIT-Outcomes 2022 Extension recommends nine items that should be included in trial protocols, regardless of trial design and population, to define and justify the chosen primary, secondary and other outcomes of the trial. The nine items cover:

  • defining and justifying the chosen study outcomes, including the timing of their measurement and what constitutes a minimal important difference for use in sample size calculations
  • describing who will assess the outcome, and the responsiveness of the chosen outcome assessment tools in the target population
  • describing any statistical methods to account for multiplicity in the analysis or interpretation of the primary and secondary outcomes.

The extension was developed using the EQUATOR methodology for reporting guidelines; the authors highlight that its nine checklist items define the minimally required outcome-specific information that should be included in a trial protocol, in addition to the items in the SPIRIT 2013 Statement. It was developed in parallel with the associated CONSORT-Outcomes 2022 Extension, and the authors note that the two should be used together to maximise their utility in improving the transparency of clinical trial reporting and addressing selective nonreporting of trial results.

The SPIRIT-Outcomes 2022 Extension should be used in conjunction with the CONSORT-Outcomes 2022 Extension to improve the transparency of clinical trial reporting and address selective nonreporting of trial results.

An important aspect in the development of both extensions was representation from patient and public partners on the expert panels. The authors call for such patient and public partnerships when developing future guidelines to bring in a broader range of perspectives, which will ultimately benefit patient-centricity.

The SPIRIT-Outcomes 2022 Extension guidelines are hoped to be particularly useful to trial protocol authors, ethics review boards and journal editors. The authors urge the use of these guidelines for transparency and integrity in trial reporting, and welcome feedback from users of the extension to inform future updates.

—————————————————–

Do you think the new SPIRIT-Outcomes 2022 Extension will improve the quality of clinical trial protocol reporting?

CONSORT-Outcomes 2022 Extension: guidelines for complete reporting of clinical trial outcomes
https://thepublicationplan.com/2023/04/18/consort-outcomes-2022-extension-guidelines-for-complete-reporting-of-clinical-trial-outcomes/
Tue, 18 Apr 2023

KEY TAKEAWAYS

  • The CONSORT-Outcomes 2022 Extension adds 17 outcome-specific items to improve reporting of randomised clinical trials.

The original Consolidated Standards of Reporting Trials (CONSORT) 2010 statement details the items that should be included when reporting the results of randomised clinical trials (RCTs). However, trial outcomes are still often inconsistently or incompletely reported. To address this, CONSORT-Outcomes 2022, an extension to CONSORT that outlines the essential outcome-specific information applicable to all outcome types, populations, and trial designs, has been published in JAMA.

In addition to the original 25-item CONSORT checklist, the CONSORT-Outcomes 2022 Extension includes 17 new outcome-specific items that should be reported for all RCTs:

  • the rationale for selecting the domain of the primary outcome
  • for each outcome, a description of the measurement variable, analysis metric (eg, change from baseline), aggregation method, and the assessment time point
  • if applicable, a definition and rationale for the minimal important change for the primary outcome
  • the cut-off values used if continuous outcome data were analysed as categorical data
  • for assessments performed at multiple time points, the time points used for the outcome analysis
  • definitions of all individual components of composite outcomes
  • identification of outcomes not prespecified in the trial protocol or registry
  • description of the study instruments used to evaluate the outcome, their reliability, validity, and responsiveness in a similar population
  • description of the person who assessed the outcome and qualifications or training necessary to administer the study instruments
  • description of processes used to ensure the quality of outcome data during and after data collection, or inclusion of a statement on where this is available
  • definition and justification of the target difference between treatment groups
  • description of methods used to account for multiplicity in the analysis or interpretation of the primary and secondary outcomes
  • description and rationale for outcome data exclusion criteria for any outcome, or a statement that no outcome data were excluded
  • description of methods used to handle and evaluate missing data
  • definition of the outcome analysis population in relation to trial protocol nonadherence
  • inclusion of results for all prespecified outcome analyses or a statement where these results are available
  • justification for performing any analyses that were not prespecified.

The authors of the extension believe the inclusion of these items in all RCT reports will help to further enhance the value, reproducibility, and transparency of RCTs and reduce the problem of selective reporting of trial results.


—————————————————–

How likely are you to use the CONSORT-Outcomes Extension checklist?

To tweet or not to tweet: social media guidance to help pharmaceutical companies adhere to the ABPI Code of Practice
https://thepublicationplan.com/2023/03/07/to-tweet-or-not-to-tweet-social-media-guidance-to-help-pharmaceutical-companies-adhere-to-the-abpi-code-of-practice/
Tue, 07 Mar 2023

KEY TAKEAWAYS

  • PMCPA has provided detailed guidance on how pharmaceutical companies should use social media.
  • Pharmaceutical companies are free to use social media but must ensure their communications and interactions are compliant with regulations.

UK pharmaceutical companies cannot advertise prescription-only medicines to the public or promote a drug before it has regulatory approval. This activity is largely self-regulated, with companies signing up to, and abiding by, the Association of the British Pharmaceutical Industry (ABPI) Code of Practice (ABPI Code). The system is long established but has not, to date, addressed the use of social media by pharmaceutical companies in detail. New guidance has now been released by the Prescription Medicines Code of Practice Authority (PMCPA), which administers the ABPI Code. The PMCPA guide provides practical advice to pharmaceutical companies on how they can use social media in compliance with the ABPI Code.

The guidance includes some overarching considerations and also detailed guidance on the following social media activities:

  • linking to information within social media posts
  • mentioning of other accounts
  • hashtags and tagging
  • responding to misinformation
  • signposting vs posting/sharing/re-sharing
  • posting corporate news and announcements
  • constructing compliant social media profiles and job advertisements
  • increasing disease awareness for the public
  • providing patient support
  • advertising meetings and events
  • announcing product and pipeline milestones
  • working with social media influencers
  • promoting to health professionals and other relevant decision makers
  • recruiting clinical trial participants.

The guidance makes clear that any material disseminated or activity carried out on any social media channel by a pharmaceutical company, its employees (even through personal accounts, where they relate to professional matters), or any third parties acting on its behalf may come within the scope of the ABPI Code and is the responsibility of the pharmaceutical company. This guidance is the first of its kind in the UK, and it is hoped that it will address some of the compliance challenges faced by UK pharmaceutical companies in the absence of clear and codified guidance. To learn more, why not watch the recent MedComms Networking Webinar on the subject or review the PMCPA’s resources for pharmaceutical companies?

—————————————————–

Do you think the new guidance goes far enough to ensure pharmaceutical companies comply with the ABPI Code?

ICMJE recommendations update: what you need to know
https://thepublicationplan.com/2022/11/15/icmje-recommendations-update-what-you-need-to-know/
Tue, 15 Nov 2022

KEY TAKEAWAYS

  • The ICMJE has updated their Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals.
  • A key update stipulates that results can be published in medical journals even if they have previously been included in health technology assessment or medical regulators’ reports.

The International Committee of Medical Journal Editors (ICMJE) Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals were updated in May 2022. An annotated PDF of the recommendations is freely available on the ICMJE website.

The ICMJE Recommendations guide accurate and unbiased reporting of industry-sponsored clinical trial data in medical journals. The two key updates in this version concern:

  • Duplicate publication: results or data contained in reports published by health technology assessment agencies or regulatory agencies are no longer considered to be duplicate publications.
  • Reporting of trial participants: authors should discuss how representative the study sample is of the larger population of interest; in cases where race or ethnicity data were not collected, authors should explain why not.

The guidance has been welcomed by industry campaigners, who in January 2022 wrote an open letter to ICMJE, requesting an update to the recommendations on duplicate publications to allow more rapid dissemination of data to the public and easier development of economic evaluations and indirect comparison analyses.

We look forward to seeing these recommendations implemented and hope the updated guidance will contribute to more timely and transparent reporting of medical research.

—————————————————–

Do you agree with the updates to the ICMJE recommendations?
