The Publication Plan – for everyone interested in medical writing, the development of medical publications, and publication planning

Meeting report: summary of the 58th EMWA Autumn Hybrid Conference – Publisher perspectives on artificial intelligence

The 58th European Medical Writers Association (EMWA) Conference began with a series of Hybrid Conference days where attendees learned about the latest trends in reporting and conducting real-world evidence (RWE) studies, explored best practices for medical translation, and heard from journal publishers about how to navigate the challenges and opportunities presented by artificial intelligence (AI) in medical writing. The Publication Plan were in attendance to summarise the sessions for the benefit of those who were unable to attend, and as a timely reminder of the key topics for those who did. You can read our summary of the session on Publisher Perspectives on AI below.

Our summaries of the RWE and Medical Translation symposia can be found here.

Publisher perspectives on AI: Empowering medical writers for the future


KEY TAKEAWAYS

  • The field of AI is constantly evolving, meaning updated guidance from publishers is needed.
  • While new AI tools and resources are continually being developed, human review is always required alongside their use.
  • Authors should declare AI use according to journal guidelines, and AI tools should not be listed as a co-author as they cannot take any responsibility.

This session provided key insights into the evolving landscape of AI in scientific publishing, from the perspectives of both publishers and journal editors. The panel presented the latest guidance for authors on AI usage, the perspectives of publishers on integrating AI into their workflows, and the attitudes of healthcare professionals (HCPs) towards the use of AI.

The session began with an introduction from co-chairs Andrea Bucceri and Martin Delahunty. Delahunty cast minds back to November 2022 when ChatGPT first hit the scene, recalling the positive reaction from both the medical writing and publishing community. Summarising the goal of the session, Annette Flanagin (Journal of the American Medical Association; JAMA) established the need to distinguish the good, the bad, and the ugly elements of AI in the scientific publishing sphere.

“How can we differentiate the good, the bad, and the ugly?” – Annette Flanagin

Generative AI publishing policies

Kelly Ann Soldavin (Taylor & Francis Group) discussed generative AI publishing policies and real-world uses of AI in publications. One of the key highlights from publisher and journal policies, as well as publisher organisations, was that generative AI cannot be listed as an author and that the use of generative AI must be disclosed. Authors are ultimately responsible for the originality, validity, and integrity of their content. Overall guidance for disclosing the use of generative AI includes:

  • AI should be disclosed in the methods, acknowledgements, and/or cover letter
  • content created or modified should be disclosed
  • the name and version of the AI tool and how the AI tool was used should be clearly stated
  • authors are encouraged to include the original input prompts and outputs in the submission or supplementary materials.

Guidelines on the use of AI-generated text and images vary, with individual publishers setting their own policies on how AI tools can be used. Generally, appropriate use of generative AI by authors includes:

  • idea generation and exploration
  • language improvement
  • interactive online search
  • coding assistance
  • aid in analysis of data.

Having specific guidance for different areas of publishing is important, with many policies including separate guidance for editors and peer reviewers, as illustrated by the AI guidelines at Taylor & Francis. With the field continually evolving, Taylor & Francis are considering potential updates to their AI policy in 2025, and wider updates are expected from other publishers, organisations, and governments.

“If you are not sure, talk to your editor, talk to the journal.” – Kelly Ann Soldavin

The JAMA perspective

Flanagin further discussed the use of AI from a publisher’s perspective, noting the increase in AI-related publications over the last year and, notably, cases where ChatGPT was listed as a co-author. In line with most publisher policies, JAMA’s policy specifies that AI cannot be accountable for authorship.

“Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.” – Annette Flanagin

At JAMA, there are clear instructions for authors on the use of AI, and authors are asked to disclose if AI was used in the manuscript submission form. In 2023/24, ~1.6% of authors disclosed AI use, most commonly:

  • for language, grammar, and translation
  • to reduce the word count for titles/abstracts
  • to identify the focus of a study/intervention.

AI is not used to make editorial decisions; however, a collection of AI-like tools are used to aid editor assessments and improve process efficiencies. These tools still require human review and oversight for accuracy, and examples include:

  • checking image integrity
  • determining similarities in manuscripts
  • recommending peer reviewers based on keywords.

JAMA are asking reviewers not to use AI for peer review, particularly as material is confidential. Only 0.6% of reviewers currently acknowledge using AI, mainly for language issues or to see if a statistical test was appropriate for the study design.

Although it is unusual for publishers to update their policies frequently, JAMA revised theirs in March 2023, reflecting how rapidly AI is progressing; further changes may still be expected.

The Lancet perspective

Jessamy Bagenal (The Lancet) continued the session, noting key similarities and differences in the uses and guidelines for AI at The Lancet:

  • authors are asked at submission to disclose whether AI was used and what it was used for; initial results suggest that 6% of authors at The Lancet say they have used AI in some form, compared with ~1.6% of authors at JAMA
  • generative AI use is limited to spelling and grammar checks for editorials and commentaries, so as not to exclude people whose first language isn’t English or who are neuroatypical
  • Elsevier (who publish The Lancet) have developed confidential publisher-specific AI tools for editors, although these are not commonly used
  • AI is not preferred for peer review and AI-generated images are not recommended.

Both Flanagin and Bagenal agreed that the field is rapidly evolving and policies could all be subject to change.

AI tools to enhance publishing processes

In the next talk, Anannya Mohapatra (Springer Nature) described her role in using AI to enhance publishing processes and develop tools and strategies to support researchers, publishers, and HCPs.

Large language models (LLMs) and generative AI can transform the way we write, with many researchers using AI for research-focused and non-research related tasks. At Springer Nature, internal AI tools have been developed for editors and medical writers to aid content creation, which allows a ‘human-machine handshake’ with LLMs. The tools are currently used for:

  • research highlights – the first AI-generated research highlight was published by Nature India in January 2023
  • PrimeViews – AI-generated infographics that condense key information from an article into one page, with a relevant image
  • plain language summaries (PLS) – AI-enhanced PLS development strategy to tailor the summary to the general public or HCPs.

Although these tools streamline processes and reduce timelines, all content is verified and fact-checked by humans to ensure reliability and accuracy. This was described as a ‘human-AI partnership in medical writing’, with specific roles that are suited to AI versus a human. As with other publishers, AI is not accepted as a credited author at Springer Nature, and use of AI tools should be declared in the summary itself.

“AI is a means, not an end. [The] focus is always on the user and a human-centred approach to AI.” – Anannya Mohapatra

Concluding, Mohapatra described newly developed tools, including a tool for editorial summaries of articles for authors, and a tool that generates PLS of research for a pharma industry and HCP audience.

A clinician’s perspective

In the final presentation of the session, Adrian Mulligan (Elsevier) gave an insight into clinicians’ attitudes towards AI. The talk covered highlights from an Elsevier 2024 study, ‘Insights 2024: Attitudes toward AI’, which gathered ~1,000 responses from clinicians across 85 countries.

Awareness of AI was high among clinicians, with the majority saying they were familiar with AI in some form. Interestingly, only around half of clinicians had actually used AI. ChatGPT was by far the most well-known AI tool, although it was noted that, as the study was performed 9 months ago, the findings may differ today given the rapidly changing field. Results from the survey indicate that institutions need support in conveying their AI usage restrictions and preparations to clinicians.

Clinicians’ attitudes towards AI are mixed, but more positive than negative. Clinicians recognise clear opportunities for AI, including:

  • accelerated knowledge discovery
  • cost savings to institutions and businesses
  • increased work quality
  • increased time for higher value work.

The concerns listed by clinicians included the potential for misinformation or errors and the erosion of human critical thinking.


“Although there would be many benefits, it would also wreak havoc if false information were spread.” – Nurse (Mexico) respondent to the Insights 2024: Attitudes toward AI study

Clinicians believe that AI must be responsible, ethical, and transparent and they see many benefits of integrating AI into their workload, including for:

  • publishing – formatting a manuscript or finding a journal
  • research – writing code or checking data
  • using scientific content – literature searches or reviewing clinical evidence
  • teaching – looking for real-world examples or preparing teaching materials
  • clinical activities – reviewing a patient’s history or identifying correct treatment approaches
  • funding activities – identifying potential collaborators or writing a proposal.

The survey highlighted that clinicians want ‘guard rails’ around AI, and they don’t want the relationship with the patient to be lost.

Closing remarks

Delahunty brought the absorbing session to a close, encouraging EMWA members to empower themselves and be better informed and feel more confident when having discussions around AI with research authors, agencies, and clients alike as the field continues to evolve.

Why not read our summaries from the RWE and Medical Translation symposia here?

——————————————————–

Written as part of a Media Partnership between EMWA and The Publication Plan, by Aspire Scientific, a proudly independent medical writing and communications agency that believes in putting people first.

——————————————————–

