The Publication Plan – for everyone interested in medical writing, the development of medical publications, and publication planning (https://thepublicationplan.com)

Can adopting AI tools unlock a new era of open science?
12 August 2025

KEY TAKEAWAY

  • Generative AI tools can simplify data sharing by automating metadata creation and flagging missed requirements, ultimately enhancing open science.

Artificial intelligence (AI) has proved transformative in scientific research, from experimental design to assisting publishers and streamlining peer review processes. But can it unlock access to research data, code, and protocols frequently lost behind digital and institutional walls? In a recent London School of Economics Impact Blog article, Niki Scaplehorn and Henning Schoenenberger, both at Springer Nature, describe how generative AI could play a pivotal role in reshaping how data are shared, potentially revolutionising open science.

Hurdles to data sharing

The COVID-19 pandemic marked a turning point for open science, with global collaboration and rapid data sharing accelerating breakthroughs. Yet, Scaplehorn and Schoenenberger highlight that there are still considerable challenges to data sharing:

  • a lack of consistent guidance and struggles to align with FAIR standards
  • confusing and overlapping data sharing policies
  • cultural barriers
  • a lack of recognition for data sharing, code publication, and protocol documentation in academia.

Springer Nature saw compliance with data sharing requirements jump from 51% to 87% simply by asking authors to justify why they hadn’t deposited data prior to article acceptance. Scaling this approach, however, demands time and manpower. Here, according to Scaplehorn and Schoenenberger, generative AI shows potential.

How can AI benefit data sharing?

The authors call for a “product” mindset that treats AI open science tools as services designed around researchers’ needs, rather than top-down mandates or administrative burdens. Scaplehorn and Schoenenberger highlight that AI can benefit data sharing through:

  • automation of metadata creation
  • flagging missing documentation and overlooked requirements
  • suggesting best practices to improve workflows.

“Generative AI could play a pivotal role in reshaping how data are shared, potentially revolutionising open science.”

The path forward

Scaplehorn and Schoenenberger believe that adopting AI tools designed around authors’ needs will streamline the burdensome aspects of data sharing. Ultimately, this will benefit researchers, policymakers, and everyone who relies on access to scientific information by lowering the barriers to open science.

—————————————————

What do you think – can AI be used to increase data sharing?

Embracing AI in publishing: a game-changer for peer review?
4 March 2025

KEY TAKEAWAYS

  • Publishers are embracing the use of GenAI to support the peer review process.
  • AI automation of onerous tasks in the publishing workflow will allow editors to spend more time on activities requiring human expertise.

Could artificial intelligence (AI) define the future of publishing? Publishers are beginning to embrace the use of generative AI (GenAI) to improve peer review processes and uphold research integrity. In an article for Research Information, Dave Flanagan, Senior Director of Data Science at Wiley, explores how GenAI is currently used in publishing and how its integration is enhancing innovation and efficiency for both authors and reviewers alike.

A vigilant approach to GenAI use

Flanagan notes that “AI assists people, it does not replace people”. This is reflected in Wiley’s framework to ensure that their AI tools remain human driven to maintain the integrity of the publication process. Collaboration between publishers and industry bodies such as the Committee on Publication Ethics (COPE) and the STM Association will help to establish guidelines and standards for GenAI usage.

What is the current guidance on the use of GenAI in publishing?

Authors:

  • must explicitly state any usage of GenAI in their paper
  • are responsible for the accuracy of GenAI-driven information, including correct referencing of supporting material
  • can employ tools to improve grammar and spelling
  • are prohibited from using GenAI for the production or alteration of original research data and results.

Reviewers:

  • must not upload manuscripts or manuscript content into GenAI tools that could use input data for training purposes, breaching confidentiality agreements
  • are permitted to use GenAI tools to improve the quality of written feedback within reports, but must maintain transparency when doing so.

“Using AI tools can free up time for editors to focus on areas demanding human expertise.”

How can AI benefit peer review?

Similar to Papermill Alarm, Wiley’s AI-powered Papermill Detection Service is a useful tool for the early detection of potentially fraudulent papers. Other AI tools in development aim to:

  • identify suitable peer reviewers
  • automate alternative journal suggestions for unsuitable manuscripts
  • streamline the formatting and reference checking process
  • enhance the discoverability of published research.

Using AI tools can free up time for editors to focus on areas demanding human expertise.

In the rapidly evolving world of AI, Flanagan believes its use is “integral to the future of peer review”. The author urges publishers and researchers alike to embrace these powerful tools responsibly, keeping the advancement of knowledge at the core.  

————————————————–

Do you believe that additional AI tools will improve the peer review process?

Meeting report: summary of Day 2 of the 2025 ISMPP European Meeting
13 February 2025

The 2025 European Meeting of the International Society for Medical Publication Professionals (ISMPP) was held in London on 27–29 January. The meeting, which was themed ‘Core Values for an Integrated Age’, saw a record-breaking 418 delegates in attendance.

A summary of the second day of the meeting is provided below to benefit those who were unable to attend the meeting, and as a timely reminder of the key topics covered for those who did.

A summary of the first day of the meeting can be found here.

Summaries of Day 2

Empowering patient voices in authorship: navigating barriers and enhancing support


KEY TAKEAWAYS

  • Patient authors provide valuable insights, but barriers like submission challenges, lack of support, and compensation concerns must be addressed.
  • Collaboration among publishers, industry, and advocacy groups is key to ensuring fair and meaningful inclusion in research.

Moderated by Stuart Donald (Krystelis), this parallel session addressed the challenges and opportunities surrounding patient involvement in medical publications. Ngawai Moss (independent patient advocate) and Laurence Woollard (On The Pulse) represented the patient author point of view, while Emma Doble (BMJ) and Rachel Kendrick (AstraZeneca) provided a publisher and industry perspective, respectively. Discussions focused on the barriers patient authors face, support mechanisms, and ethical considerations regarding compensation.

The patient journey to authorship

For many patient authors, the journey begins with advocacy or participation in clinical trials. However, the transition to formal authorship presents several hurdles. The complexity of the submission process can be overwhelming, requiring knowledge of formatting, peer review expectations, and revisions. Many patients lack mentorship, making it difficult to navigate rejections and feedback.

Time constraints also play a significant role. Many patient authors have health conditions, caregiving responsibilities, or professional commitments that limit their ability to engage fully in the writing process. Additionally, access to medical journals remains a major barrier, as many patients cannot afford subscription fees to read relevant research.

Support from publishers and industry

Publishers like BMJ have been leading the way in integrating patient voices, having published patient-authored articles for over 30 years. Their initiatives include patient advisory panels, editorial board representation, and author guidance to simplify the publication process. To further ease the journey, BMJ assigns dedicated contacts to patient authors, reducing the administrative burden of participation.

The industry perspective on patient authorship is evolving but remains inconsistent. According to Kendrick, companies recognise the value of patient perspectives but often lack standardised approaches to inclusion. Many organisations are now working to establish clearer guidelines and engage patients earlier in the research process, ensuring their voices shape publications from the outset rather than as an afterthought.

Many organisations are now working to establish clearer guidelines and engage patients earlier in the research process, ensuring their voices shape publications from the outset rather than as an afterthought.

Compensation and ethical considerations

The issue of compensating patient authors sparked debate, with Woollard highlighting concerns about accessibility and arguing that elitism in academic publishing creates barriers for patient contributors. He advocated for financial reimbursement, particularly for industry-sponsored publications, and called for fair market value standardisation to ensure consistency in compensation. Providing the counterargument, Kendrick cautioned that direct payment for authorship could introduce bias and reputational risks, particularly in industry-funded research. Instead, she emphasised the importance of transparency and aligning compensation policies with ethical publishing standards.

Recognition and authorship tagging

There is no clear consensus on how to identify patient authors in medical literature. While some advocate for clear labelling to highlight patient contributions, others worry that ‘patient author’ tags could reinforce tokenism. One proposed solution is allowing multiple affiliations, recognising patient authors not just for their lived experience but also for their expertise in advocacy or research.

Some patient authors also prefer anonymous or pseudonymous contributions, protecting them from public scrutiny. To address this, the panel recommended early discussions between patient authors and collaborators to set expectations regarding authorship disclosure and acknowledgment.

The shape of things to come? Beyond the traditional manuscript (a balloon debate)


KEY TAKEAWAYS

  • An interactive debate saw the audience vote on the future of scientific communication.
  • AI, plain language summaries (PLS), podcasts, and videos were proposed as alternative publication formats, but traditional manuscripts prevailed as the foundation of medical publishing.

Rethinking scientific publications: A balloon debate

In this parallel session, a dynamic balloon debate challenged the traditional scientific manuscript’s role in modern publishing. Although scientific papers have moved online, their core format has remained largely unchanged since 1665. Thought leaders advocated for alternative publication formats better suited to today’s digital landscape.

Alternative formats in medical communication

  • AI-generated content: Jason Gardner (Real Chemistry) introduced ‘GEMMA’ (Generates Every Medical Manuscript Artificially), arguing that AI could tailor scientific content for different audiences while maintaining the manuscript as a cornerstone.
  • PLS: Amanda Boughey (Envision Pharma Group) highlighted data showing high usefulness ratings of PLS among healthcare professionals (HCPs), emphasising that PLS enhance accessibility without compromising scientific integrity.
  • Podcasts & audio articles: Clare Cook (Adis) emphasised the flexibility of audio formats, allowing HCPs to absorb information on the go. Podcasts can incorporate expert voices and patient perspectives, and facilitate nuanced discussions while being peer-reviewed and indexed on PubMed.
  • Video explainers: Sam Cavana (Taylor & Francis) underscored the rise of visual media, particularly among younger HCPs. Video explainers can effectively demonstrate mechanisms of action and provide quick, engaging access to complex data.
  • Traditional manuscripts: Erin Crocker (Real Chemistry) defended the traditional manuscript as the foundation of medical publishing. She argued that while alternative formats are valuable, they must be grounded in rigorous, peer-reviewed research.

The debate & final verdict

Following audience votes, AI and podcasts were eliminated first, followed by video explainers. The final debate centred on PLS versus traditional manuscripts. While PLS make scientific information more accessible, concerns were raised about maintaining scientific integrity in simplified formats. In the end, the traditional manuscript prevailed.

In her victory speech, Crocker acknowledged the value of integrating multiple formats to enhance scientific communication, advocating for a collaborative future where AI, PLS, podcasts, and videos complement, rather than replace, traditional manuscripts.

Erin Crocker acknowledged the value of integrating multiple formats to enhance scientific communication, advocating for a collaborative future where AI, PLS, podcasts, and videos complement, rather than replace, traditional manuscripts.

Interestingly, in a second running of this session, the audience reached a different conclusion, with PLS emerging as the winning format. This outcome highlights the evolving perspectives on how best to communicate scientific research in an increasingly digital world.

Making meetings better for all


KEY TAKEAWAY

  • Inclusion isn’t just about making congresses accessible—it’s about fostering connection and belonging for all attendees.

Recognising that there is still room to improve inclusivity at congresses, this parallel session tackled a critical issue: making scientific meetings accessible to all. The session featured perspectives from experts who discussed the barriers attendees face and the steps needed to improve accessibility and engagement.

Patient perspectives

Matt Eagles (Havas Lynx) shared his personal experiences, emphasising the challenge of feeling connected to the scientific data presented at congresses. He pointed out that accessibility is not just about attending, but also about engaging meaningfully. He recounted how his Parkinson’s makes it difficult to stand for lengthy periods at poster sessions. Simple solutions, such as offering audio descriptions, could bridge this gap. With around one-quarter of the UK population having a disability or alternative needs, improving accessibility would benefit a significant proportion of attendees. Eagles also highlighted how inclusive seating arrangements, such as circular tables instead of rows, discourage segregation and foster a sense of collaboration.

With around one-quarter of the UK population having a disability or alternative needs, improving accessibility would benefit a significant proportion of attendees

Charlotte Rowan (Caudex) expanded on the issue, noting that economic constraints are also significant barriers for many attendees. Hybrid meetings offer a partial solution, enabling broader participation. She also emphasised that providing logistical support, such as childcare and nursing rooms, could ensure that professionals with caregiving responsibilities can attend. Rowan stressed that organisers often “don’t know what they don’t know,” making it essential to involve diverse voices, including patients, in event planning.

The discussion also highlighted social considerations. Eagles shared how small acts, such as someone offering to get him food at a buffet, made a profound difference in his experience of inclusion. However, significant challenges still remain. Caregiver needs were highlighted as a substantial barrier. Few congresses offer free tickets or subsidies for caregivers, leaving some patients facing double the cost, or simply unable to attend.

What can we do?

Cate Foster (Oxford PharmaGenesis), an author of the ‘Good Practice for Conference Abstracts and Presentations’, discussed plans to update these guidelines to include ED&I considerations. The revised guidelines will address practical aspects such as poster accessibility, with easy-to-implement changes like positioning QR codes at a wheelchair-friendly height.

The ISMPP organisers themselves shared their efforts to integrate accessibility considerations into their event planning. This year, ISMPP offered captioning services, chose venues with good transport links, and avoided major religious and national holidays. The patient support programme, which provides travel assistance to patient advocates, was another successful step towards inclusivity.

Stephen Cutchins (Cvent) highlighted the importance of seeing accessibility as an investment, not a cost. Thoughtful planning increases attendance and engagement, ultimately benefiting event organisers. While virtual and hybrid formats offer accessibility benefits, they lack the networking advantages of in-person meetings. Future improvements could include better virtual networking tools, such as avatars that simulate in-person interactions.

Keynote: the compass within: staying true to core values amidst chaos


KEY TAKEAWAY

  • Our core values are shaped by stories we are told from childhood, but we must challenge our inherent beliefs to foster inclusivity—both in society and in AI development.

Wednesday’s keynote speaker Naomi Sesay, Head of Creative Diversity at Channel 4, discussed how we can stay true to our core values in a chaotic world, and explored how our morals can feed into AI.

How do we get our values?

Sesay believes that we’re hardwired to hear stories and they resonate whether we believe them or not. From childhood, we absorb our values through stories told to us at home, at school and by society generally. These stories can be the truth, half-truth, or even untrue, but we accept them through needing to belong to our community.

We’re hardwired to hear stories and they resonate whether we believe them or not. We absorb our values through stories told to us at home, at school and by society generally.

Challenging where truth comes from

Sesay highlighted that our understanding of the truth is based on Western education, but if we fail to seek knowledge from non-Western societies, we risk marginalising them to our detriment. For example, GraphCast, an AI global weather forecasting tool, can predict global weather with immense accuracy but has difficulty predicting short-term changes in local weather. In contrast, indigenous communities around the world have developed systems for predicting local weather to a very high degree of accuracy. Could we learn something from them?

Inclusivity is key for success

One ‘story’ Sesay pointed out that we are taught to accept is Darwin’s theory of evolution. We do not question his theory, despite the fact that even Darwin had doubts about certain aspects of it, and Sesay called attention to the original complete title of his famous book: “On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life”. She emphasised this as an example of why we must question the stories we are told. We run with Darwin’s concept of ‘survival of the fittest’ in a ‘dog eat dog world’, whereas Sesay argued that nature works best in collaboration and harmony. Indeed, companies that prioritise empathy and inclusivity allow their employees to stay true to their individual core values, and this feeling of inclusion fosters collaboration. She emphasised, however, that while companies and governments need to focus on inclusivity, the onus is also on the individual to evolve and challenge our core beliefs.

We need to teach AI inclusivity

“AI is not sentient yet. We are still in control, and we need to talk about ethics now.”

Focusing on how morals feed into a future where AI will become more a part of our world, Sesay highlighted that discriminatory ideas, which we absorb from the stories we are told from childhood, become imprinted in our neurology and are difficult to “unlearn”, much as riding a bike would be. Similarly, AI is currently a “toddler”, and we need to be mindful that whatever we teach it now will be retained and will shape how it learns. To illustrate this point, Sesay recalled how, after giving AI the prompt to “create AI as a sentient being”, it generated a humanoid image with Caucasian features, seemingly by default. This, she believes, is because AI is used predominantly by the Western world, and it shows that AI is already not representing all cultures and values equally. She reminded us, however, that AI is not yet sentient. We are still in control, and we need to talk about ethics now.

Member research oral presentations

What about sex? A call to action for improved sex and gender reporting in industry-sponsored clinical research: results from a literature review


KEY TAKEAWAY

  • Enhancing adherence to SAGER guidelines in industry-sponsored trials is crucial for improving the relevance of research findings.

Liz Southey (The Salve Health) shared findings from a study assessing sex and gender reporting in clinical research. Despite their influence on disease progression, treatment response, and healthcare access, these factors are often underreported in industry-sponsored trials—limiting the relevance and applicability of findings.

Just 37% of journals mentioned the SAGER guidelines, and key checklist items were largely overlooked.

The study reviewed articles published between 2023 and 2024 to assess adherence to the Sex and Gender Equity in Research (SAGER) guidelines, introduced in 2016 to improve reporting standards. Of 252 screened studies, only 28 met the eligibility criteria. Alarmingly, just 37% of journals mentioned the SAGER guidelines, and key checklist items—such as defining sex and gender or analysing data by sex—were largely overlooked. Gender representation among authors was also imbalanced, with only 35% of identified authors being women.

These gaps in reporting risk exacerbating health disparities. For example, women in clinical trials experience twice the rate of adverse drug reactions compared to men, highlighting the need for better reporting of sex differences. Beyond health outcomes, the gender data gap also has significant economic implications. Research by the World Economic Forum suggests that closing this gap could unlock 75 million disability-adjusted life years and generate $1 trillion in annual global gross domestic product.

In closing, Southey emphasised the role of medical publication professionals in advocating for better reporting practices. Promoting awareness and adherence to SAGER guidelines can improve research inclusivity, making findings more applicable to diverse populations and ultimately enhancing healthcare outcomes.

Speaking with one voice: an integrated and innovative planning framework for clear and consistent communications


KEY TAKEAWAY

  • Use of an Integrated Medical Communication Plan fosters collaboration, consistency, and alignment in pharmaceutical communications, improving message clarity and engagement with healthcare professionals.

Debra Mayo (Otsuka) addressed the challenges of fragmented pharmaceutical communications, emphasising the need for a unified voice. She introduced an Integrated Medical Communication Plan (IMCP)—a strategy designed to enhance collaboration, maintain consistency, and ensure alignment across teams.

Recent data from Sermo’s HCP Sentiment Series highlights the importance of targeted communication: 81% of physicians prefer relevant, personalised information, and 72% are more likely to engage with such communications. However, inconsistent messaging between medical affairs and commercial teams often creates confusion, reducing clarity and impact.

The IMCP framework is built on four key principles:

  • Collaboration: breaking down silos to align messaging across teams.
  • Consistency: maintaining a unified scientific narrative across all channels.
  • Alignment: synchronising strategy and tactics through structured planning.
  • Integration: prioritising strategic value and audience engagement.

To develop and implement the IMCP, a core committee identified key challenges, including siloed teams and inconsistent messaging. Their solution? A centralised platform for information access and knowledge sharing.

They also developed practical tools—spreadsheets, Power BI dashboards, and strategic lexicons—to streamline communication, reduce redundancy, and boost efficiency. At the centre of this initiative is the IMCP dashboard, a central hub where teams can track, update, and refine communication in real time.

The Integrated Medical Communication Plan dashboard is a central hub where teams can track, update, and refine communication in real time.

By embracing an integrated approach, pharmaceutical companies can enhance engagement with healthcare professionals, improve message clarity, and strengthen their scientific voice—ultimately fostering more effective and impactful communication.

A pilot study evaluating the performance of a custom-built large language model-based app that uses reporting guideline items to generate manuscript abstracts


KEY TAKEAWAY

  • Conspectus, an AI-powered tool, enhances manuscript abstract preparation with accuracy and positive user feedback. Nonetheless, human validation remains essential.

Niall Harrison (OPEN Health) and colleagues, in collaboration with ARTEFACT, assessed whether Conspectus, a custom-built large language model (LLM)-based application that generates abstracts using reporting guidelines, could enhance the accuracy and appropriateness of manuscript abstracts.

Conspectus generated well-structured, accurate abstracts, and received positive user feedback, though human oversight remains essential.

The workflow followed a structured process:

  • Manuscript upload: users upload a manuscript and set key parameters (eg, study type).
  • Prompt generation: Conspectus creates a tailored prompt based on user input and relevant reporting guidelines.
  • Prompt review: users review and refine the proposed prompt structure.
  • Abstract drafting: Conspectus generates an abstract, which users then review and fact-check.
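For readers curious what a guideline-driven drafting pipeline of this kind might look like under the hood, here is a minimal, hypothetical Python sketch. Conspectus itself is proprietary, so everything here is an illustrative assumption: the function names, the stubbed model call, and the guideline items (loosely modelled on structured-abstract checklists) are not taken from the actual tool.

```python
# Hypothetical sketch of a guideline-driven abstract workflow.
# The model call is stubbed; a real tool would invoke an actual LLM API,
# and the user-review steps (prompt review, fact-checking) are elided.

# Illustrative guideline items keyed by study type (assumed, not from Conspectus).
GUIDELINE_ITEMS = {
    "randomised trial": ["Objective", "Methods", "Results", "Conclusions"],
    "observational study": ["Background", "Methods", "Results", "Conclusions"],
}

def build_prompt(manuscript: str, study_type: str) -> str:
    """Assemble a tailored prompt from user input and guideline items."""
    items = GUIDELINE_ITEMS.get(
        study_type, ["Background", "Methods", "Results", "Conclusions"]
    )
    sections = "\n".join(f"- {item}" for item in items)
    return (
        f"Draft a structured abstract for a {study_type}, covering:\n"
        f"{sections}\n\nManuscript:\n{manuscript}"
    )

def call_llm(prompt: str) -> str:
    """Stub standing in for the model call."""
    return "[draft abstract based on prompt]"

def draft_abstract(manuscript: str, study_type: str) -> str:
    """Upload -> prompt generation -> (user review elided) -> draft."""
    prompt = build_prompt(manuscript, study_type)
    return call_llm(prompt)  # output still requires human review and fact-checking

print(build_prompt("Full manuscript text...", "randomised trial").splitlines()[0])
```

The point of the sketch is the shape of the workflow, not the prompt wording: the study type selects the reporting checklist, the checklist shapes the prompt, and the generated draft remains subject to human review at every step.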

In this pilot study, users tracked their time and assessed usability, while reviewers evaluated abstract quality. The results were promising: 95% of users would recommend Conspectus, and 82% felt it improved abstract preparation. Adoption was swift—81% of users were ready to use Conspectus within 15 minutes, and 61% saw potential time savings. Accuracy was highest for results sections (98%) but lower for conclusions (78%). Appropriateness scores varied across sections, with 69% meeting expectations for introductions and 58% for results, highlighting the need for better prompt refinement and user training.

Limitations included lower accuracy for study types not well-represented in training data and analyses lacking dedicated reporting guidelines (eg, post-hoc clinical trial analyses). Improving briefing forms and prompt training could enhance performance, while future research should explore real-world applications and cases with greater time-saving potential.

How can we collaborate with authors to integrate AI in publication development?


KEY TAKEAWAY

  • Transparency is essential when integrating AI into the publication process.

The role of generative AI in medical publications is evolving. In this session, industry, agency, and publisher panellists discussed practical tips for AI integration, with a little help from some artificial friends.

The agency perspective

Nina Divorty (CMC Connect) highlighted that the perspective of authors is critical, as they have final responsibility for the publication. Results from an audience poll showed that most participants had not yet used AI in collaboration with authors. Divorty recommended early communication and active discussion with authors to obtain agreement per ICMJE criteria, as well as to confirm the target journal and clarify their guidelines around AI use.

The publisher’s perspective

Stephanie Preuss (Springer Nature) introduced four AI-generated personas (created using the AI video platform Colossyan) to illustrate different author attitudes towards AI:

  • The Anarchist: Pro-AI and experimental but may overlook guidelines.
  • The Anxious: Wary of AI, deeply concerned about accuracy and ethics.
  • The Apathetic: Lacks a deep understanding of AI but is agreeable to its use.
  • The Conscious Collaborator: Informed, cautious, and committed to ethical integration.

These personas broadly conformed to attitudes that audience poll participants had encountered in the workplace. Preuss noted that although authors have raised concerns about declaring AI use in publications, many researchers are already using AI for tasks such as translation, fraud detection, and plain language summaries. Preuss stressed that AI cannot be listed as an author, that transparency is key, and there remains a need for “a strong human handshake in the centre”.

“There remains a need for a strong human handshake in the centre [of AI integration].”

The industry perspective

James Dathan (AstraZeneca) acknowledged the huge potential of AI but argued that authors deserve transparency around the extent of AI’s contribution to the work, as well as rigorous proof of the technology’s efficacy, or lack thereof. On this last point, Dathan stressed that negative data are also important, that there may be situations where AI use is not appropriate, and that “just because we can doesn’t mean we should”.

Wrapping up, all the panellists agreed that transparency, integrity, and accountability were vital as we enter this exciting new era of integrating AI into the development of medical publications. Revealingly, ‘cautious’ and ‘curious’ were the two most frequently occurring words in an audience word cloud poll.

The role of a medical publication professional in 2035: redundancy by robots?


KEY TAKEAWAY

  • In the next decade, the role of the medical publication professional may evolve significantly, but core values—ethical storytelling, transparency, research integrity, and effective content dissemination—will remain fundamental.

The future of medical publications: Embracing AI and upholding core values

In a session sponsored by Real Chemistry, moderator Mike Dixon (Healthcare Communications Association) guided participants through an exploration of the future role of medical publication professionals, focusing on how the integration of AI will shape their responsibilities. Reflecting on the past decade, Dixon prompted attendees to consider whether the fundamentals of their profession have shifted and how they might evolve by 2035.

Ann Gordon kicked off the discussion by addressing the potential day-to-day changes AI could bring and what professionals might seek from their roles in the future:

  • AI integration: From the advent of conversational AI like ChatGPT in 2022 to the possibility of autonomous agents, AI is set to become integral to daily tasks.
  • Technological advancements: The emergence of AI-powered tools, such as wearable devices providing instant information and portable virtual workspaces, will enhance storytelling capabilities and elevate data visualisation techniques.
  • Evolving influencer profiles: Professionals will need to collaborate with digitally savvy opinion leaders who have significant influence in the digital and social media landscapes.
  • Sustainability and accessibility: Utilising holographic technology for virtual meeting attendance can promote both sustainability and accessibility.

Gordon emphasised that while technology will evolve, core values like ethical storytelling, transparency, and unbiased information dissemination will remain constant. Medical publication professionals will play a crucial role in guiding healthcare providers toward trustworthy content.

Medical publication professionals will play a crucial role in guiding healthcare providers toward trustworthy content.

Considering the entry of Generation Alpha into the workforce by 2035, a poll revealed that most participants believe this cohort will experience digital fatigue and seek more human interaction to stay engaged and build strong working relationships.

Next up, Catarina Fernandes (Johnson & Johnson) offered a pharmaceutical industry perspective, highlighting potential future opportunities and challenges in areas such as job descriptions, technological adoption, evidence dissemination, and collaboration. Key takeaways included:

  • Adaptability: Professionals must be flexible, adept with new data forms, and open to innovative dissemination methods.
  • Ethical standards: Maintaining strict ethical standards involves ensuring transparency in research, upholding a robust peer review system, promoting inclusivity, avoiding bias, and fostering trust within the scientific community.

Hamish McDougall (Sage) discussed the publisher’s role in 2035, focusing on research integrity and content dissemination. McDougall noted that while content will become more flexible and audiences more diverse, the core responsibilities of publishers—ensuring research integrity and effectively disseminating content—remain unchanged.

Dixon concluded the session by stressing that while AI will not replace medical publication professionals, those unwilling to collaborate with AI may be surpassed by those who do.

Closing remarks, raffles, and poster awards

Chair of the Programme Committee, Mithi Ahmed-Richards, and Vice-chair, Catherine Elliott, concluded the 2025 European Meeting of ISMPP with reflections on this year’s theme, Core Values for an Integrated Age. They also announced and congratulated this year’s poster prize winners:

  • Most Reflective of Meeting Theme: Characteristics of qualitative-based patient experience data publications in rare diseases, neuroscience, and oncology – Sarah Thomas, Oleks Gorbenko, Jacqui Oliver, Catherine Elliott, Simon R. Stones, Charles Pollitt
  • Best Original Research & Most Visionary Research: Establishing a lay review panel to ensure medical research accessibility – Oleks Gorbenko, Nathalie Cannella, Marta Moreno, Geoff Kieley, David Gothard, Jo Gordon, Sarah Thomas
  • Best Visual Communication: Speaking their language: Healthcare professionals’ use of plain language materials with patients – Isabel Katz, Alexa Holland, Hamish McDougall, Sarah J. Clements

Ahmed-Richards and Elliott extended their gratitude to the Meeting Programme Committee, presenters, sponsors, partners, and exhibitors for their contributions. Finally, they reminded attendees that registration is now open for the 21st Annual Meeting of ISMPP, taking place 12–14 May 2025 in Washington, DC.

Why not also read our summaries of Day 1 of the meeting?

——————————————————–

Written as part of a Media Partnership between ISMPP and The Publication Plan, by Aspire Scientific, an independent medical writing agency led by experienced editorial team members, and supported by MSc and/or PhD-educated writers.

——————————————————–

Meeting report: summary of the afternoon session of the 12th EMWA symposium on artificial intelligence in medical writing https://thepublicationplan.com/2024/06/11/meeting-report-summary-of-the-afternoon-session-of-the-12th-emwa-symposium-on-artificial-intelligence-in-medical-writing/ Tue, 11 Jun 2024 10:29:31 +0000

The 12th European Medical Writers Association (EMWA) symposium, entitled ‘AI in Medical Writing’, took place on 9 May. The symposium explored technological aspects of AI and ethical considerations, and showcased practical applications for medical writers and communications specialists. If you missed the afternoon session, you can catch up on the key themes with our summaries below, or get a quick refresher if you were in attendance!

You can read our summary of the morning session of the symposium here.

Librarians are essential in bridging the AI gaps


KEY TAKEAWAY

  • AI presents medical writers and the broader pharmaceutical industry with great opportunities but also valid concerns; library teams have an important role to play in supporting effective and ethical use of AI and promoting AI literacy.

In this session, Jill Shuman (Takeda) considered the proliferation of artificial intelligence (AI) tools and the opportunities and challenges they pose for medical writers. It is estimated that generative AI could bring $60–100 billion in value to the pharmaceutical industry annually. AI is also a powerful tool in the face of ‘infobesity’ – with 2 new papers added to PubMed every minute, keeping up with the literature becomes ever more challenging.

There is a pressing need for tools that can assess and extract scientific information faster and in greater depth, with AI being particularly useful for applications such as data extraction for systematic reviews and competitive intelligence. At the same time, AI raises a number of valid concerns, particularly with regard to ethical and copyright issues. Library teams within pharmaceutical organisations have already been looking at these issues for some time and have a critical role to play in supporting the appropriate adoption of AI. In particular, librarians can work to promote AI literacy, ensuring that use of AI tools is both effective and ethical. They can also support medical writers in developing the AI skills that will soon be a prerequisite for the job.

Library teams within pharmaceutical organisations have a critical role to play in supporting the appropriate adoption of AI.

AI-assisted tool for academic writing – supporting researchers in sharing knowledge


KEY TAKEAWAY

  • Application of AI to the process of creating a book can reduce the time to publication and the burden on authors and editors, but humans remain central to the process.

Vivien Bender (Springer Nature) described an innovative project that brought together authors, editors, and experts from across Springer Nature to develop a new academic book using generative AI. Creation of the book followed a design process approach, with the team drawing on AI support at each stage in the process. The process also followed 5 principles for the use of AI in publishing: dignity, respect, and minimising harm; fairness and equity; transparency; accountability; and privacy and data governance.

The experiment highlighted the importance of engaging an interdisciplinary team in the development process and that, while AI can be a valuable and powerful tool, humans remain central to the process, relying on their expertise on the subject matter and skills in areas such as high-quality editing. Humans must also continue to take ultimate responsibility for the content. The application of AI accelerated the publication process, making topical information available sooner and reducing the time demands on authors. By assisting authors in areas where they have less experience or skills, AI can also lower barriers for those looking to publish their work.

The experiment highlighted the importance of engaging an interdisciplinary team when incorporating AI into the writing process.

Translation in the era of AI


KEY TAKEAWAY

  • AI is a game-changer for translation services; human translators still have an important role to play but need to adapt and refine their skills to make effective use of AI in their role.

AI is having a dramatic impact on translation services, raising the question of whether human translation has a future in the face of rapidly advancing AI tools. Translator and conference interpreter Nora Díaz (Consultant Translator) described the arrival of AI as a game-changer, noting that, depending on the AI engine, machine translation can now rival human translation. Generative AI is widely available and can provide context-aware translations adapted to particular audiences. The potential benefits of AI translation include faster turnaround combined with improved accuracy.

The uptake of AI by translation companies has been rapid, driven by the need to remain competitive. The impact for translators has been mixed – while AI provides them with enhanced tools it also puts their job security at risk. However, companies are increasingly adopting a hybrid approach which retains the human translator as an essential element, with AI used to generate a ‘pre-translation’ which the human translator then refines through a very close edit and check. In this rapidly changing environment, translators need to reskill and upskill. In particular, translators should look to further their skills in developing AI prompts, which are critical to ensuring the quality of machine translations.

Translators should look to further their skills in developing AI prompts, which are critical to ensuring the quality of machine translations.

Structured content authoring


KEY TAKEAWAY

  • Generative AI is well suited to use alongside structured content authoring in the development of a range of clinical documents.

Mati Kargren (Parexel International) considered the application of AI and structured content authoring (SCA) to the development of clinical documents across the product lifecycle. SCA uses an approach in which information is broken down into components (eg, study design, patient characteristics, and interventions) that can then be rearranged and reused across multiple documents. Benefits of the SCA approach can include increased consistency, faster turnaround times, reduced need for manual intervention, and improved tracking of content.

AI can be particularly effective where clear structures are in place. Structured content makes for more reliable AI training and, in turn, more reliable AI performance. At the same time, the clear and consistent structure of many clinical documents makes them well suited for generation by AI trained on the structured content.

AI can be particularly effective where clear structures are in place. Structured content makes for more reliable AI training and, in turn, more reliable AI performance.
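
The component-and-reuse idea behind SCA can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the component names, document types, and text are invented, and a real SCA platform would manage versioning, metadata, and approval workflows on top of this):

```python
# Hypothetical sketch of structured content authoring (SCA):
# content is written once as named components, then reassembled
# into multiple document types. All names and text are illustrative.

components = {
    "study_design": "Randomised, double-blind, placebo-controlled phase 3 trial.",
    "population": "Adults aged 18-65 with moderate-to-severe disease.",
    "intervention": "Drug X 10 mg once daily for 12 weeks.",
}

# Each document type is defined by the components it reuses, in order.
document_templates = {
    "protocol_synopsis": ["study_design", "population", "intervention"],
    "csr_methods": ["study_design", "intervention"],
}

def assemble(doc_type: str) -> str:
    """Build a document by concatenating its components in template order."""
    return "\n".join(components[name] for name in document_templates[doc_type])

print(assemble("csr_methods"))
```

Because every document pulls from the same component store, a correction made once (eg, to the study design text) propagates to all documents that reuse it, which is where the consistency gains come from.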

Dispelling the myths of ChatGPT and misconceptions of AI for your medical writing


KEY TAKEAWAY

  • Generative AI and machine learning are not about to replace human medical writers, but medical writers need to adapt and learn how to use AI appropriately in their work.

Depending on where you look, AI is either our gateway to a golden age or a fast-track to unemployment and poverty. David Piester (Symbiance) looked to dispel some of the myths surrounding generative AI and its impact on medical writers. While concerns about the risks of AI are valid, fear is beginning to subside with growing familiarity. The reality is that, while AI and machine learning will have an important role to play, they will not replace medical writers or be able to write a complete clinical study report (CSR) on their own.

Medical writers need to work alongside AI, using it for specific tasks, such as structured and formatted processes, and for repetitive and data-heavy tasks. The requirement for closed-loop systems to ensure data privacy and intellectual property rights is a key barrier to the use of AI to generate CSRs and other documentation. The medical writer remains essential and in short supply. However, writers need to adapt to apply AI effectively and appropriately. As Piester noted: “AI will not replace you. A person who’s using AI will replace you.”

While concerns about the risks of AI are valid, fear is beginning to subside as familiarity with AI grows.

Empowering regulatory medical writers: leveraging tools to enhance your writing


KEY TAKEAWAY

  • Generative AI tools are available that offer seamless integration with other applications, and are best deployed in a small batch approach to specific tasks within the overall document development process.

Philip Burridge (Morula Health) considered the AI tools available to medical writers to assist and enhance their work. In particular, Burridge focused on Microsoft Copilot, based on its seamless integration with widely used Microsoft Office applications such as Word, PowerPoint, and Excel. Given the critical issue of data confidentiality, it was noted that prompts, responses, and data used within Copilot remain within the Microsoft 365 service boundary and can be locked down by particular users and accounts. However, this does not preclude Microsoft from using your data in some way.

Benefits to medical writers of AI tools such as Microsoft Copilot include time savings related to mundane and repetitive tasks, accurate outputs, integration with other applications such as word processing and spreadsheet programmes, and data security on a user level. It was noted that quality outputs require quality inputs in the form of well-devised prompts, and that the best use of AI is for small batch work, applied to specific, structured tasks within the overall document development process.

Quality AI outputs require quality inputs in the form of well-devised prompts.

How will medical writers work with AI


KEY TAKEAWAY

  • Use of AI tools has the potential to allow medical writers to focus on the strategic elements of their role; while writers need to know how to use the tools, they do not need to understand in detail how they work.

It is clear that AI is driving a shift in the role of the medical writer. Julia Forjanic Klapproth (Trilogy Writing and Consulting) explored how medical writers can work together with AI and examined some common misconceptions concerning AI and medical writing. Firstly, Forjanic Klapproth countered the opinion that medical writers will need to be highly tech-savvy, noting that they will not need to understand in detail how AI tools work in order to use them. She used the analogy that most of us can drive a car successfully but relatively few understand the detailed workings of car motors. Learning to use AI tools should be no different from mastering other software tools.

Another misconception is that AI will remove the strategic element of the writer role. Conversely, Forjanic Klapproth argued that AI will make medical writers more strategic, freeing them from mundane and repetitive tasks to focus on guiding the direction of projects and gaining deeper insights from the data. The medical writer has a key role as the ‘architect’ and ‘story builder’ of the output, using their vision to steer the AI tool to a successful output. Use of AI should also reduce the potential for bias compared with humans when extracting and assessing data. Ultimately, AI should allow medical writers to get more done more quickly and with greater consistency and accuracy, at the same time as allowing them to focus on strategic tasks such as meaning and messaging.

AI should allow medical writers to get more done more quickly and with greater consistency and accuracy.

AI in regulatory medical writing – opportunities and challenges


KEY TAKEAWAY

  • Use of rules-based AI can substantially improve speed and efficiency of preparing regulatory documents, freeing medical writers from repetitive tasks to focus on strategic authoring.

Eishita Agarwal (GSK) looked at how innovative AI-driven systems are driving advances in regulatory medical writing. In particular, rules-based AI can be used to increase the speed and efficiency of developing documents such as study reports and clinical summaries. AI tools can also improve accuracy and consistency. Agarwal emphasised that AI is an enabling technology rather than a ‘magic bullet’ and needs to be deployed alongside other enablers within a multidisciplinary approach engaging all key stakeholders.

Practical experience of deploying rules-based AI to development of 10 CSRs demonstrated substantial reductions (~50%) in development time for 70% of the CSRs. For the remaining 30%, efficiencies were held back somewhat by resistance to changing mindsets and adopting new working practices. Use of AI is redefining the medical writer role, empowering writers to focus on strategic authoring while deploying technology to handle repetitive tasks.

Short intro to the EU AI Act and its impact


KEY TAKEAWAY

  • The European Union AI Act is a long overdue piece of legislation that aims to maximise the benefits of AI while mitigating the risks, and holds providers and deployers accountable for the ethical and risk-conscious implementation of AI.

The European Union (EU) AI Act is a ground-breaking piece of regulation that governs development and use of AI within the EU; it aims to promote human-centric and trustworthy AI at the same time as protecting health, safety, and fundamental rights, while still supporting innovation. The legislative framework around AI is a dynamic and rapidly evolving field, and Ward Neefs (Pfizer) overviewed key features of the AI Act.

The Act classifies the risks associated with AI into 4 categories – minimal, limited, high, and unacceptable – with healthcare falling within the high-risk category and thus requiring the strictest safeguards. It overlays existing regulations in areas such as medical devices and in vitro diagnostics, bringing with it some additional requirements. Neefs concluded that AI carries a high risk if it is applied without careful consideration of its limitations and potential for bias. However, if good data modelling practices are followed, machine learning has the potential to do more good than harm.

The EU AI Act aims to promote human-centric and trustworthy AI at the same time as protecting health, safety, and fundamental rights, while still supporting innovation.

How to create an effective prompt: a mandatory skill for medical writers


KEY TAKEAWAY

  • Prompt engineering is becoming a mandatory skill for medical writers to learn how to make effective use of generative AI tools.

Namrata Singh (Turacoz Group) addressed the critical issue of creating prompts for AI tools and how this is becoming an essential skill for medical writers. Prompts are instructions entered into the AI interface and need to be engineered to yield precise, coherent, and pertinent responses. Prompts fall into a number of different categories but all require a considered and creative approach in order to achieve the best outputs.

Singh described the CLEAR Framework, which encapsulates 5 factors that are central to effective prompt engineering: Concise, Logical, Explicit, Adaptive, and Reflective. An understanding of the parameters that determine the effectiveness of prompts is helpful, but learning prompt engineering comes from exploring, interacting with the tools, and learning from mistakes.

Learning prompt engineering comes from exploring, interacting with the tools, and learning from mistakes.

Optimising medical content creation: a strategic framework for implementing generative AI


KEY TAKEAWAY

  • Development of a strategic framework for implementing generative AI can help to ensure optimal resource utilisation and timely adoption of new technology, as well as guiding medical content creators through the challenges of applying generative AI.

Generative AI is a transformative technology that comes with significant challenges and limitations. Keyur Brahmbhatt (Merck KGaA) presented a strategic framework for implementing generative AI for medical content creation, providing for rapid implementation and optimal resource utilisation.

Generative AI is a rapidly developing technology and comes with limitations including bias, ‘hallucinations’, and intellectual property and privacy concerns. A number of approaches are available to overcome or mitigate the limitations, ranging from quick, low-cost options such as prompt optimisation, to lengthy, high-cost options such as training new, customised large language models. A horizontal integration roadmap for applying generative AI across a range of medical contents can streamline efforts and investments when applying this rapidly advancing technology.

Summary and conclusions

AI technology is set to redefine the role of medical writers but not make them redundant. Applying AI to repetitive and mundane tasks can improve accuracy and consistency while accelerating the development of medical materials, freeing up writers for more strategic activities.

——————————————————–

Written as part of a Media Partnership between EMWA and The Publication Plan, by Aspire Scientific, a proudly independent medical writing and communications agency that believes in putting people first.

——————————————————–

Meeting report: summary of the morning session of the 12th EMWA symposium on artificial intelligence in medical writing https://thepublicationplan.com/2024/06/07/meeting-report-summary-of-the-morning-session-of-the-12th-emwa-symposium-on-artificial-intelligence-in-medical-writing/ Fri, 07 Jun 2024 09:46:05 +0000

The 12th European Medical Writers Association (EMWA) symposium, entitled ‘AI in Medical Writing’, took place on 9 May. The symposium explored technological aspects of AI and ethical considerations, and showcased practical applications for medical writers and communications specialists. If you missed the morning session, you can catch up on the key themes with our summaries below, or get a quick refresher if you were in attendance!

You can read our summary of the afternoon session of the symposium here.

Harnessing AI for efficient systematic reviews in medical publications


KEY TAKEAWAY

  • AI tools can assist with the different steps of developing systematic medical reviews; writers are encouraged to learn how these tools work to improve their workflows.

Sepanta Fazaeli (Stryker) presented the opening session of the symposium on how natural language processing (NLP) models can be used to expedite the development of systematic medical reviews. NLP models have evolved from traditional machine learning models (with no semantic understanding) and deep learning models that use neural networks (with some semantic understanding), to large language models (LLMs) such as OpenAI’s ChatGPT, which offer powerful interpretation of text and the ability to better capture the nuances of human language.

NLP models have evolved as a powerful means for interpreting human language.

Fazaeli outlined a workflow for developing systematic reviews using AI:

  1. Query generation and retrieval: state-of-the-art tools connect to a database, eg, PubMed, for data retrieval
  2. Screening: key studies are prioritised
  3. Appraisal and extraction: most often based on the abstract alone to reduce computational demands, with a focus on PICO elements (Patient/population, Intervention, Comparison, and Outcomes)
  4. Analysis and report generation: quantitative and qualitative analyses; PRISMA diagrams are updated as new studies are integrated

There are now multiple tools offering automation of some or all of these steps, though none are necessarily validated. Writers should select tools based on what they want to achieve, and query the validation, metrics, explainability, and the specificity and sensitivity of a tool before purchase or use.
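
The four-step workflow above can be sketched as a minimal pipeline skeleton. This is an invented illustration, not any of the tools discussed: the records are fabricated, and the crude keyword rules stand in for the database queries and NLP models a real system would use.

```python
# Illustrative skeleton of the AI-assisted systematic review workflow:
# retrieval -> screening -> PICO extraction -> analysis.
# Records and keyword rules are invented placeholders.

records = [  # step 1: records retrieved from a database query
    {"id": 1, "abstract": "RCT of drug X vs placebo in adults; outcome: remission."},
    {"id": 2, "abstract": "Narrative commentary on treatment trends."},
]

def screen(record: dict) -> bool:
    """Step 2: prioritise key studies (here, a crude keyword filter)."""
    return any(term in record["abstract"].lower() for term in ("rct", "trial"))

def extract_pico(record: dict) -> dict:
    """Step 3: appraisal and extraction from the abstract alone,
    focusing on PICO elements (placeholder matching logic)."""
    text = record["abstract"]
    return {
        "population": "adults" if "adults" in text else None,
        "intervention": "drug X" if "drug X" in text else None,
        "comparison": "placebo" if "placebo" in text else None,
        "outcome": "remission" if "remission" in text else None,
    }

# Step 4: analysis and reporting over the included studies.
included = [extract_pico(r) for r in records if screen(r)]
print(f"{len(included)} of {len(records)} records included")
```

Even in this toy form, the structure shows where validation matters: the screening and extraction steps are exactly where a tool's sensitivity and specificity should be queried before it is trusted.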

Beyond the hype: 5 ways you can use your domain knowledge to supercharge your writing with AI


KEY TAKEAWAY

  • Writers should use generative AI tools with an educational mindset and become skilled in prompting AI in order to obtain the desired output.

Avi Staiman (Academic Language Experts and SciWriter.ai) provided recommendations on how writers can make the most of AI tools using their own subject knowledge. Staiman emphasised that LLMs need to be guided in an iterative manner to achieve the best output.

Effective prompting is key to obtain the desired results from LLMs. When prompting, the following elements can be used to tailor the output:

  1. Role – who you want the AI model to be, eg, a scientist, a medical writer, or a patient
  2. Goal – what you are trying to achieve, eg, write an academic article
  3. Level – eg, lay text versus scientific writing
  4. Few-shot prompting – giving the tool examples, eg, “here is a good example of an introduction section of a randomised controlled trial”
  5. Personalisation – using specific instructions, eg, “only use papers from the last 5 years”
  6. Constraints – what you don’t want the tool to do, eg, “do not provide a summary”
  7. Iteration – repeating prompts in order to optimise the output

Staiman gave the following example of an effective prompt using some of these elements:

“You are a science writer [1] writing an article for the New England Journal of Medicine [3]. I want you to write an exhaustive literature review [2] on the topic of the main symptoms of colon cancer including gaps in the research. The literature review should focus on research published in the last few years [5]. Don’t include an introduction or conclusion [6].” 
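
The numbered elements above can also be assembled programmatically, which makes iteration [7] easier because each element can be varied independently between attempts. The sketch below is a hypothetical helper (the function name and wording are invented, and a real workflow would send the resulting string to an LLM):

```python
# Minimal sketch of building a prompt from the elements listed above:
# role [1], goal [2], level [3], personalisation [5], constraints [6].
# Few-shot examples [4] could be appended in the same way.

def build_prompt(role, goal, level=None, personalisation=None, constraints=None):
    parts = [f"You are a {role}."]          # [1] role
    if level:
        parts.append(f"Write for {level}.")  # [3] level
    parts.append(goal)                       # [2] goal
    if personalisation:
        parts.append(personalisation)        # [5] personalisation
    if constraints:
        parts.append(constraints)            # [6] constraints
    return " ".join(parts)

prompt = build_prompt(
    role="science writer",
    goal="Write an exhaustive literature review on the main symptoms of "
         "colon cancer, including gaps in the research.",
    level="readers of the New England Journal of Medicine",
    personalisation="Focus on research published in the last few years.",
    constraints="Don't include an introduction or conclusion.",
)
print(prompt)
```

Re-running with one element changed at a time is a simple, repeatable way to practise the iterative refinement Staiman described.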

From ink to code – the evolution of medical writing in the AI era


KEY TAKEAWAY

  • Medical writers should leverage generative AI tools as ‘copilots’, with documentation at each stage of the writing process, to avoid unintentionally propagating errors in scientific publications.

Ashish Uppala (Scite.ai) discussed medical writing in the era of AI. The arrival of generative AI tools has enabled writers to delegate a greater proportion of cognitive load than ever before. However, the human writer is still responsible for the thinking process – this was true historically when words were written physically on clay and then using pen and paper, and is still true today.

Rather than using generative AI tools as automated agents, which might increase the risk of bias and error propagation, Uppala encouraged writers to leverage these tools as copilots to optimise efficiency, with documentation at each step of the process. To avoid unintentionally propagating errors in publications, medical writers should use tools such as Scite.ai that ‘show their work’ by indicating the raw information used to generate an output. Uppala concluded his presentation by calling on medical writers to provide feedback to entrepreneurs to help improve generative AI tools for scientific publications.

The methods of written communication may have evolved, but humans are still responsible for the thinking process.

AI and IP: ramification for vendor, provider and customer agreements


KEY TAKEAWAY

  • Medical writers should be aware of potential intellectual property issues when using AI tools. Writers and companies need to ensure they comply with all applicable laws when using AI.

Carlo Scollo Lavizzari (a specialist in intellectual property protection) focused his talk on intellectual property (IP) in the context of AI. As the use of AI tools becomes more routine, organisations may need to update their legal contracts and agreements. IP legislation for AI is a rapidly evolving landscape, so companies need to be aware of changes to the law. There are IP considerations throughout the process of using AI, from the inputs used to prompt AI (which may contain existing IP), to the resulting outputs (which could infringe upon existing IP or constitute new IP). Protection of AI tools themselves as IP is also a current topic, with an increasing number of patent applications for AI-related inventions.

Lavizzari made the following recommendations for individuals and companies when using AI tools:

  • understand and document all processes involving AI – eg, the tools and prompts used
  • be aware of any restrictions from clients when using AI – eg, when AI tools cannot be used
  • be aware of accuracy limitations of AI tools – they can often hallucinate
  • know and follow the applicable laws – eg, the recent European Union (EU) AI Act
  • follow guidelines and ethical best practices
  • only agree to what you can do using AI, and try to state in vendor agreements what you will/will not do using AI tools
  • document the responsibility falling upon the client – seek indemnity, warranty, and ‘hold harmless’ clauses
  • think ‘insurance’.

Copyright and artificial intelligence: an overview of how they intersect


KEY TAKEAWAY

  • LLMs are trained using vast amounts of copyrighted materials, and these materials are copied, stored, and recreated by the LLMs; collective licensing allows for the efficient utilisation of copyrighted materials by AI systems.

In his presentation, Victoriano Colodrón (Copyright Clearance Center) provided an overview of the basic principles of copyright and how copyright and generative AI intersect. Both economic and moral rights are implicated in the training of AI technologies. LLMs, for example, are trained using vast amounts of copyrighted materials, and these materials are copied, stored, and recreated by the LLMs.

Generative AI tools can infringe on copyright in two main ways:

  1. Ingesting copyrighted materials during training
  2. Producing output that contains identical or substantially similar material to the protected work

Although many countries have no specific AI-related laws and rely on existing statutes, a major development has been the recent approval of the EU AI Act, which emphasises that AI users need permission to utilise copyrighted work unless exceptions apply. Transparency is a key issue globally, and the EU AI Act will require generative AI providers to make a sufficiently detailed summary of copyrighted works used to train their systems publicly available, with a similar bill pending in the US.

A key question is how AI providers can obtain permissions to use protected works in their systems. Rather than using direct licensing from individual rightsholders, Colodrón recommended voluntary collective licensing, which involves the aggregation of rights from multiple rightsholders by a collective licensing organisation, which then collects royalties from users and distributes them to rightsholders. Collective licensing thus enables a faster and more convenient way for content users to gain access to rights.

Colodrón emphasised that outputs from generative AI tools are improved by the use of high quality, responsibly sourced copyrighted works, which increase accuracy and reduce bias, and thus it is in everyone’s best interests that AI is paired with respect for creators and copyright.

Generative AI tools can infringe on copyright by ingesting copyrighted materials and by producing output material that is identical or substantially similar to the original work. 

Neurobiological roots of artificial intelligence


KEY TAKEAWAY

  • Since the inception of AI in the 1950s, there have been rapid advances in building technologies that aim to mimic human intelligence.

Pawel Boguszewski (Nencki Institute of Experimental Biology) discussed the evolution of intelligence in biology and in artificial systems. Although there is no one clear definition of intelligence, there is a general agreement that it is based on the ability to learn and apply information. We now know that rather than being a reactive machine, the human brain is a predictive machine, which allows it to respond to the environment in real time. Neuroscientists are now trying to elucidate the parts of the brain responsible for predicting events.

Just as living intelligence has evolved over time, the concept of AI has progressed from the Turing test in the 1950s, through to today’s LLMs. Many modern AI tools, for example Google DeepMind’s AlphaGo Zero, and AlphaFold, use artificial neural networks that were designed to mimic the living brain. Recent discussions have gone as far as asking whether AI has gained consciousness. Boguszewski remarked that there are two competing schools of thought on the modern definition of consciousness. The first (‘global workspace theories’) defines mental states as being conscious when they are broadcast within a global workspace in which frontoparietal networks play a central hub-like role; using this definition, a machine could be built that is said to have consciousness. The second (‘integrated information theory’) states that consciousness is identical to the cause-effect structure of a physical system that specifies a maximum of irreducible integrated information; using this latter definition, a machine cannot be conscious. Boguszewski concluded his talk by drawing the audience’s attention to the impressive advancements that are taking place in both neuroscience and AI today, which may further improve understanding of our own intelligence.

Many modern AI tools use artificial neural networks that were designed to mimic the living brain.

Ethical considerations in AI-supported medical writing


KEY TAKEAWAY

  • AI users should be aware of the ethical challenges and limitations associated with data-driven technologies.

In his talk, Mike Katell (The Alan Turing Institute) discussed ethics in the context of AI. Katell defined AI ethics as the set of tools for guiding responsible choices for the design, development, and deployment of digital technology. The SAFE-D (Sustainability, Accountability, Fairness, Explainability, Data stewardship) principles, for example, serve as a starting point to reflect upon the possible harms and benefits associated with data-driven technologies.

Katell highlighted several key challenges when considering ethics in AI:

  1. AI is not a single technology, but rather is an evolving concept that comprises multiple different tools
  2. Contemporary AI was developed originally for marketing purposes rather than for more demanding and strict fields such as medicine
  3. There are multiple decisions involved in the design, development, and use of generative AI systems that shape the outputs of these systems
  4. While some generative AI tools are highly supervised, other tools such as ChatGPT and Google’s Gemini are largely automated without human intervention, which makes it difficult to monitor how outputs are generated from a given input
  5. Generative AI tools are trained to produce plausible outputs rather than facts, and in this way can be thought of as highly complex ‘autocompletes’. ChatGPT, for example, is unable to solve some simple mathematical problems, and though Google’s Med-PaLM can provide accurate information in response to a query, this information is often incomplete

Key questions include who should be accountable if a system causes harm, and who should take responsibility for actions that cannot be explained – the AI company, the user, or the decision maker? Katell emphasised the need for caution around the claims of cost savings and enhanced capabilities made by AI tools in the long term, and highlighted some of the larger issues of AI at play, such as labour issues, the environmental costs of AI, and concentration of power by a small number of companies.

Users of AI technologies should be aware of the downsides of such tools from an ethical standpoint, in addition to the benefits that they bring. 

Ethical challenges and considerations in implementing AI in healthcare: a Research Ethics Committee perspective


KEY TAKEAWAY

  • There are additional ethical challenges that need to be addressed in clinical research studies that use AI technologies, including those surrounding data sharing, data bias, autonomy, and transparency.

Alison Rapley (Freelance Medical Writing Consultant) gave an overview of the ethical concerns that need to be considered in clinical research studies that utilise AI, such as studies that involve patient monitoring, or prediction or diagnosis of illness through digital health applications and platforms. Rapley identified the following potential issues:

  1. Sharing of patient data: considerations such as what level of data is necessary for the AI model being used or built, and how such data are stored and transmitted are important to address, but most importantly, it is critical to ensure that how data will be shared has been made clear to the participants of the study in order to retain patient trust
  2. Fairness, inclusiveness, and equity: data and AI algorithms should not be biased – many AI models are trained using biobank data, which are inherently biased towards particular patient groups
  3. Autonomy: human autonomy should supersede machine autonomy, and AI technologies should be used as tools rather than relied upon without human intervention
  4. Transparency, explainability, and intelligibility: the purpose and use of AI needs to be made clear to the study organisations and participants; AI technologies should be explainable to different audiences, eg, patients, developers, and regulators
  5. Risk/benefit ratio: safeguards should be put in place, especially when it comes to sensitive patient data, and just because you can use AI, it doesn’t mean you should

Just because you can use AI, it doesn’t mean you should.

Artificial intelligence: pharma view


KEY TAKEAWAY

  • The benefits and risks of AI should be balanced to put patients first; it is more important than ever that patients have access to trustworthy information and data.

Uma Swaminathan (GSK) and Art Gertel (MedSciCom) co-presented a talk on AI from the perspectives of pharmaceutical companies and the general public. Swaminathan highlighted that patients, ethics, and trust should be at the centre of the pharmaceutical industry. AI can bring important benefits for patients, such as accelerated innovation and greater efficiency, and therefore faster approval of new treatments. However, these benefits need to be balanced with the risks, which include questions of accountability and explainability, and data privacy and data/algorithm bias.

AI should be human-centric, with human accountability. Company policies should be updated to ensure that they are fit for purpose for AI, and there should be proactive risk management and robust governance in place. Decisions should be made collectively and collaboratively, rather than by individuals, to ensure ethical practice.

Gertel emphasised the importance of patient trust in the context of AI. Healthcare decisions are no longer being made solely by the physician: many patients are now taking on the role of partners in their care decisions, by consulting technologies such as Google, and now AI, for information. It is more important than ever that patients have access to trustworthy material supporting the principles that healthcare is safe, effective, patient-centred, timely, efficient, and equitable.

It is more important than ever that patients have access to trustworthy material supporting the principles that healthcare is safe, effective, patient-centred, timely, efficient, and equitable.

Patient perspective on generative AI


KEY TAKEAWAY

  • There are opportunities for generative AI tools to assist with each stage of the patient journey.

Mitchell Silva (Esperity & Patient Centrics) focussed his talk on AI from the patient’s point of view. Silva noted that there are opportunities for generative AI tools to assist with patients’ needs at all stages of their journey, from earlier detection of symptoms and accelerated diagnosis, to better disease understanding and optimised treatment decisions. For example, patients can upload their medical files to ChatGPT to obtain lay information, and deepfake avatars can be used by time-poor physicians to educate patients and answer their questions in the patient’s own language.

Silva urged caution regarding some of the potential negative effects of generative AI tools, namely data privacy, accuracy and reliability, and regulatory compliance (for example with General Data Protection Regulation).

Generative AI can assist patients with better understanding of their disease.

Why not read our summary of the afternoon session of the symposium?

——————————————————–

Written as part of a Media Partnership between EMWA and The Publication Plan, by Aspire Scientific, a proudly independent medical writing and communications agency that believes in putting people first.

——————————————————–

Creating an ironclad AI policy for healthcare communications: a guide from the HCA
https://thepublicationplan.com/2024/06/04/creating-an-ironclad-ai-policy-for-healthcare-communications-a-guide-from-the-hca/
Tue, 04 Jun 2024

KEY TAKEAWAYS

  • In their latest guide on the use of AI in healthcare communications, the HCA makes recommendations for the development of a robust and clear AI policy.
  • AI policies should ensure the ethical, responsible, and transparent use of AI; the technology should be intended to support human work, rather than replace it.

In their 2023 position statement, The AI Roadmap, the Healthcare Communications Association (HCA) issued a call to action: “it’s time to act on AI”. Now, with the continued increase in generative AI use across healthcare communications, the HCA has issued further guidance on the creation of AI policies. This new guidance sits alongside the roadmap, providing insights on how to develop AI policies that ensure the responsible and ethical use of this powerful technology.

The HCA’s guide outlines key features and considerations for the development of a robust AI policy:

  • A clearly stated purpose: the aims of an AI policy must be clear and can range from providing rules on AI use to ensuring that AI supports human work rather than replacing it.
  • Which AI tools can be used: it should be very clear which AI tools are approved for use and which are not.
  • Ethical and legal considerations: policies should prohibit the uploading of confidential data to AI systems without permission. Exceptions may be the use of closed, proprietary systems, but this should be clearly stated in the policy.
  • Intellectual Property: as well as ensuring data protection, it is important that the use of AI-developed content does not infringe upon the intellectual property of others.
  • Training: organisations should inform employees, relevant stakeholders, and suppliers of their AI policy, and provide adequate training so that it is fully understood and implemented.
  • Accuracy and bias: to ensure compliance with ethical and regulatory codes, and to avoid inherent bias and discrimination, human oversight is required to evaluate, review, and edit all AI outputs, including citations.
  • Transparency: organisations should always be open about their use of AI, including declaring its use when it has a significant impact on communications outputs.

Considering the speed at which AI technology is evolving, the HCA advises that organisations review, update, and communicate their AI policy every 3 to 6 months. Moreover, the HCA encourages healthcare communication professionals to recognise the transformative potential of AI tools and embrace them responsibly, openly, and safely.

The HCA encourages healthcare communication professionals to recognise the transformative potential of AI tools and embrace them responsibly, openly, and safely.

————————————————–

Which aspect of an organisational AI policy would you find most useful?

AI in scientific reporting: NASW’s position statement
https://thepublicationplan.com/2024/05/21/ai-in-scientific-reporting-nasws-position-statement/
Tue, 21 May 2024

KEY TAKEAWAYS

  • NASW sets out its position on the use of AI, highlighting the importance of human writers and editors and the need for transparency.
  • NASW calls for members to follow these principles and for us all to remain vigilant in the use of AI to maintain integrity and accuracy in scientific reporting.

In the wake of organisations such as the International Society for Medical Publication Professionals (ISMPP) and Nature setting out their stance on the use of AI in medical publishing, the National Association of Science Writers (NASW) have now released their position statement on the use of generative AI tools.

Who are NASW?

NASW is a community of people who write and produce material intended to inform the public about science, health, engineering, and technology. At the forefront of NASW’s operating principles is their aim to “foster the dissemination of accurate information regarding science through all media normally devoted to informing the public”.

What is NASW’s position on AI?

In their position statement, NASW highlight a number of current concerns around AI tools replacing human writers and editors.

In light of these concerns, NASW go on to make the following commitments and recommend that members:

  • do not use generative AI tools to replace human writers and editors
  • do not support publication of content generated entirely by AI, without human input and oversight
  • do not use AI-generated images, except under very particular conditions and with safeguards in place
  • maintain transparency about the use of AI systems
  • support media unions in demanding worker protections and input into AI use.

What can you do?

NASW call on us all to “remain vigilant so that readers and writers alike can clearly distinguish between human- and algorithm-generated content”.

We must remain vigilant so that readers and writers alike can clearly distinguish between human- and algorithm-generated content.

————————————————–

What do you think – should we all be following the NASW guidelines to protect writers and the public from the potential pitfalls of AI?

Discovering AI and the future of Medical Affairs: MAPS 2024 EMEA Annual Meeting
https://thepublicationplan.com/2024/05/08/discovering-ai-and-the-future-of-medical-affairs-maps-2024-emea-annual-meeting/
Wed, 08 May 2024

The Medical Affairs Professional Society (MAPS) 2024 EMEA Annual Meeting will take place at the Meliá Castilla in Madrid from 12–14 May.

The final day of the meeting will feature a Closing Plenary Session on the transformative power of generative AI technologies in the field of Medical Affairs. The session will explore how AI models such as ChatGPT could revolutionise the pharmaceutical industry and will identify opportunities for medical affairs professionals to incorporate AI technology into their workflows. The session will provide attendees with:

  • A comprehensive understanding of foundational and generative AI models and their potential within Medical Affairs
  • Real-world use cases showing how AI is already being implemented in the industry
  • An awareness of the opportunities and potential pitfalls of AI so they are fully equipped to navigate the AI frontier

Register to attend today!


View the meeting agenda.

Find out more about MAPS at medicalaffairs.org

—————————————————–

Will altmetrics need to evolve in the face of the AI revolution?
https://thepublicationplan.com/2024/03/05/will-altmetrics-need-to-evolve-in-the-face-of-the-ai-revolution/
Tue, 05 Mar 2024

KEY TAKEAWAY

  • As the volume of online content created by generative AI grows, altmetrics must adapt to this changing environment or risk becoming redundant.

Altmetrics, which measure the online attention that scholarly literature receives, have proliferated in recent years. Measures such as the Altmetric Attention Score (AAS) are now commonplace on journal webpages, alongside traditional citation analysis. However, as the digital landscape shifts towards content developed by generative artificial intelligence (AI), will altmetrics need to adapt or perish? A recent article by David Stuart for Research Information examined this conundrum.

Generative AI: the challenges for meaningful metrics

Stuart points to various challenges associated with the rapid increase in use of generative AI. Its capabilities can be harnessed to make previously time-consuming tasks straightforward. Alongside the benefits this brings, there is an increased risk of:

  • purposeful metric manipulation – generative AI could be used to generate large volumes of online mentions, each appearing to originate from a separate user
  • AI being used to create and endorse work produced by paper mills
  • mentions becoming misattributed by AI to derivative, rather than original, works, skewing metrics data.

Adapt or perish

While it could be argued that this points to the possible demise of altmetrics, we can also look to the words of Okakura Kakuzo: “the art of life lies in constant readjustment to our surroundings”. To this end, Stuart suggests altmetrics will need to adapt if they are to keep pace with a new and changing online environment. For example, as more and more online content is developed by generative AI, one possibility is that social media will move towards the more widespread use of subscription models and verified accounts, with restrictions placed on content generated by other sources. Such changes could mean that a modified information base is available to feed into new or adapted metrics. As the online world continues to adjust to the capabilities of generative AI, so too must the way we use metrics.

Altmetrics will need to adapt if they are to keep pace with a new and changing online environment.

————————————————

Do you see your use of altmetrics changing in response to the potential online dominance of generative AI?

ChatGPT: the newest author of scientific research?
https://thepublicationplan.com/2023/11/16/chatgpt-the-newest-author-of-scientific-research/
Thu, 16 Nov 2023

KEY TAKEAWAY

  • A ‘ChatGPT-authored’ scientific paper highlights the promise and pitfalls of using AI in research and publications.

Use of artificial intelligence (AI) in scientific publishing seems inevitable. While the full capabilities of this fast-changing technology are yet to be determined, some in medical publishing have begun to explore ways to harness the potential of generative AI, while others urge caution and lament a lack of structured guidance. Recently, as reported by Gemma Conroy in Nature News, Professor Roy Kishony and his student, Tal Ifargan, provided new fuel for the debate, by asking ChatGPT to conduct research and write a paper from scratch.

Kishony and Ifargan used a ‘data to paper’ system, in which software acted as a ‘go-between’ linking humans and generative AI. This system automatically prompted ChatGPT to follow the steps of scientific research, from hypothesis generation to development of a scientific manuscript. In less than an hour, ChatGPT developed a study objective; wrote code to analyse a large, publicly available dataset; and drew conclusions based on its findings and existing literature, which it reported in a 19-page research article.
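The stepwise prompting that such a ‘go-between’ performs can be sketched as a simple chained loop, in which each research step’s output is folded into the context for the next prompt. This is an illustrative outline only: the real system’s interface is not described here, and `call_llm` is a hypothetical stand-in for an actual chat-model API.

```python
# Hypothetical sketch of an automated 'data to paper' prompting pipeline.
# `call_llm` is a placeholder, not a real API.

RESEARCH_STEPS = [
    "Formulate a hypothesis for the dataset described below.",
    "Write analysis code to test the hypothesis.",
    "Interpret the results and draw conclusions.",
    "Draft a manuscript reporting the study.",
]

def call_llm(prompt: str) -> str:
    """Stand-in for a generative AI call; a real system would query a model here."""
    return f"[model output for: {prompt.splitlines()[0]}]"

def data_to_paper(dataset_description: str) -> list:
    """Chain the research steps, feeding each output into the next prompt's context."""
    context = dataset_description
    outputs = []
    for step in RESEARCH_STEPS:
        prompt = f"{step}\n\nContext so far:\n{context}"
        result = call_llm(prompt)
        outputs.append(result)
        context += "\n" + result  # accumulate context for the next step
    return outputs

outputs = data_to_paper("Public health survey data: diet, exercise, diabetes status.")
# one output per research step, produced without human intervention between steps
```

The human experts in the study then reviewed these outputs after the fact — the pipeline itself runs unattended, which is precisely why the checks described below matter.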

The study highlighted some promising aspects of incorporating AI into research and publication pathways, namely reduced timelines and the potential to quickly generate written summaries. However, it also shone a light on a number of limitations and risks:

  • False narratives: In this case, ChatGPT claimed to ‘address a gap in the literature’, although the subject (a link between diabetes risk and diet and exercise) was already well investigated.
  • Decrease in research quality: Kishony flagged the risks of generative AI leading to ‘p hacking’ or a flood of low-quality research papers.
  • Incapable of self-correction: Stephen Heard of Scientist Sees Squirrel also provided commentary and analysis on the limitations thrown up by the study, including generative AI’s lack of accuracy. Expert human intervention was required throughout, to spot and correct errors.
  • Regurgitating existing ideas: Heard also emphasised that generative AI creates content based on existing source material, thus perpetuating biases and reducing innovation and creativity.
  • Hallucinations: As explained by Jie Yee Ong in The Chainsaw, ‘hallucinations’ are a well-known problem with generative AI. This study was no exception, with ChatGPT generating fake citations despite access to the published literature. As Ong puts it, “for now, it is best not to treat everything ChatGPT spits out as gospel”.

Kishony and Ifargan’s carefully planned study allowed generative AI’s work to be checked for accuracy by human experts. Researchers agree that these human checks and balances remain essential to ensuring the credibility of scientific research and publications in which AI plays a role.

Researchers agree that human checks and balances remain essential to ensuring the credibility of scientific research and publications in which AI plays a role.

————————————————–

What do you think will be the biggest impact of using AI in the publication of scientific research?
