
The 12th European Medical Writers Association (EMWA) symposium, entitled ‘AI in Medical Writing’, took place on 9 May. The symposium explored the technological and ethical aspects of AI and showcased practical applications for medical writers and communications specialists. If you missed the morning session, you can catch up on the key themes with our summaries below, or get a quick refresher if you were in attendance!
You can read our summary of the afternoon session of the symposium here.
Harnessing AI for efficient systematic reviews in medical publications
KEY TAKEAWAY
- AI tools can assist with the different steps of developing systematic medical reviews; writers are encouraged to learn how these tools work to improve their workflows.
Sepanta Fazaeli (Stryker) presented the opening session of the symposium on how natural language processing (NLP) models can be used to expedite the development of systematic medical reviews. NLP models have evolved from traditional machine learning models (with no semantic understanding) and deep learning models that use neural networks (with some semantic understanding), to large language models (LLMs) such as OpenAI’s ChatGPT, which offer powerful interpretation of text and the ability to better capture the nuances of human language.
NLP models have evolved as a powerful means for interpreting human language.
Fazaeli outlined a workflow for developing systematic reviews using AI:
- Query generation and retrieval: state-of-the-art tools connect to a database, eg, PubMed, for data retrieval
- Screening: key studies are prioritised
- Appraisal and extraction: most often based on the abstract alone to reduce computational demands, with a focus on PICO elements (Patient/population, Intervention, Comparison, and Outcomes)
- Analysis and report generation: quantitative and qualitative analyses; PRISMA diagrams are updated as new studies are integrated
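To make the screening step concrete, here is a minimal, illustrative sketch of prioritising candidate abstracts by how many PICO-related terms they mention. This is not how any specific commercial tool works (real systems use NLP models rather than keyword counts), and all names and keywords below are hypothetical:

```python
# Illustrative sketch only: rank candidate abstracts so that the most
# PICO-relevant studies surface first during screening.
def pico_score(abstract: str, pico_keywords: dict[str, list[str]]) -> int:
    """Count how many PICO keywords appear in an abstract."""
    text = abstract.lower()
    return sum(
        1
        for terms in pico_keywords.values()
        for term in terms
        if term in text
    )

def prioritise(abstracts: list[str], pico_keywords: dict[str, list[str]]) -> list[str]:
    """Return abstracts sorted from most to least PICO-relevant."""
    return sorted(abstracts, key=lambda a: pico_score(a, pico_keywords), reverse=True)

# Hypothetical PICO keywords and abstracts, for illustration
keywords = {
    "population": ["adults", "patients"],
    "intervention": ["drug x"],
    "comparison": ["placebo"],
    "outcome": ["mortality"],
}
abstracts = [
    "A blog post with no clinical content.",
    "Adults receiving drug X versus placebo: effects on mortality.",
]
ranked = prioritise(abstracts, keywords)  # the clinical abstract ranks first
```

In practice, ranking lets human reviewers work through the most promising records first rather than screening in arbitrary order.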
There are now multiple tools that automate some or all of these steps, though not all have been formally validated. Writers should select tools based on what they want to achieve, and should query a tool’s validation, metrics, explainability, and sensitivity and specificity before purchase or use.
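A tool’s sensitivity and specificity can be checked against a small human-labelled reference set. The sketch below shows the standard calculation; the screening decisions in the example are hypothetical:

```python
def sensitivity_specificity(predicted: list[bool], actual: list[bool]) -> tuple[float, float]:
    """Compute sensitivity (true positive rate) and specificity (true
    negative rate) of include/exclude decisions against human labels."""
    tp = sum(p and a for p, a in zip(predicted, actual))          # true positives
    tn = sum((not p) and (not a) for p, a in zip(predicted, actual))  # true negatives
    fn = sum((not p) and a for p, a in zip(predicted, actual))    # missed studies
    fp = sum(p and (not a) for p, a in zip(predicted, actual))    # wrongly included
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical decisions: True = include study in the review
tool_decisions = [True, True, False, False, True]
human_labels = [True, False, False, False, True]
sens, spec = sensitivity_specificity(tool_decisions, human_labels)
```

For screening, high sensitivity matters most: a missed relevant study (false negative) is usually more costly than an extra record to review by hand.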
Beyond the hype: 5 ways you can use your domain knowledge to supercharge your writing with AI
KEY TAKEAWAY
- Writers should use generative AI tools with an educational mindset and become skilled in prompting AI in order to obtain the desired output.
Avi Staiman (Academic Language Experts and SciWriter.ai) provided recommendations on how writers can make the most of AI tools using their own subject knowledge. Staiman emphasised that LLMs need to be guided in an iterative manner to achieve the best output.
Effective prompting is key to obtain the desired results from LLMs. When prompting, the following elements can be used to tailor the output:
- Role – who you want the AI model to be, eg, a scientist, a medical writer, or a patient
- Goal – what you are trying to achieve, eg, write an academic article
- Level – eg, lay text versus scientific writing
- Few-shot prompting – giving the tool examples, eg, “here is a good example of an introduction section of a randomised controlled trial”
- Personalisation – using specific instructions, eg, “only use papers from the last 5 years”
- Constraints – what you don’t want the tool to do, eg, “do not provide a summary”
- Iteration – repeating prompts in order to optimise the output
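The elements above can also be assembled programmatically when the same prompt structure is reused across documents. This is a minimal sketch (the function and its parameters are illustrative, not a real API; iteration happens across repeated calls rather than inside the template):

```python
def build_prompt(role, goal, level="", examples=None, personalisation="", constraints=""):
    """Assemble a prompt from the elements listed above: role, goal,
    level, few-shot examples, personalisation, and constraints."""
    parts = [f"You are {role}.", goal]
    if level:
        parts.append(f"Write at this level: {level}.")
    for example in examples or []:            # few-shot prompting
        parts.append(f"Here is a good example: {example}")
    if personalisation:
        parts.append(personalisation)
    if constraints:
        parts.append(constraints)
    return " ".join(parts)

prompt = build_prompt(
    role="a science writer",
    goal="Write an exhaustive literature review on the main symptoms of colon cancer.",
    personalisation="Only use papers from the last 5 years.",
    constraints="Do not include an introduction or conclusion.",
)
```

A template like this makes it easy to vary one element at a time (eg, the role or the constraints) while iterating towards the desired output.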
Staiman gave the following example of an effective prompt using some of these elements:
“You are a science writer [1] writing an article for the New England Journal of Medicine [3]. I want you to write an exhaustive literature review [2] on the topic of the main symptoms of colon cancer including gaps in the research. The literature review should focus on research published in the last few years [5]. Don’t include an introduction or conclusion [6].”
From ink to code – the evolution of medical writing in the AI era
KEY TAKEAWAY
- Medical writers should leverage generative AI tools as ‘copilots’, with documentation at each stage of the writing process, to avoid unintentionally propagating errors in scientific publications.
Ashish Uppala (Scite.ai) discussed medical writing in the era of AI. The arrival of generative AI tools has enabled writers to delegate a greater proportion of cognitive load than ever before. However, the human writer is still responsible for the thinking process – this was true historically when words were inscribed on clay tablets and later written with pen and paper, and it is still true today.
Rather than using generative AI tools as automated agents, which might increase the risk of bias and error propagation, Uppala encouraged writers to leverage these tools as copilots to optimise efficiency, with documentation at each step of the process. To avoid unintentionally propagating errors in publications, medical writers should use tools such as Scite.ai that ‘show their work’ by indicating the raw information used to generate an output. Uppala concluded his presentation by calling on medical writers to provide feedback to entrepreneurs to help improve generative AI tools for scientific publications.
The methods of written communication may have evolved, but humans are still responsible for the thinking process.
AI and IP: ramifications for vendor, provider, and customer agreements
KEY TAKEAWAY
- Medical writers should be aware of potential intellectual property issues when using AI tools. Writers and companies need to ensure they comply with all applicable laws when using AI.
The talk by Carlo Scollo Lavizzari (a specialist in intellectual property protection) focussed on intellectual property (IP) in the context of AI. As the use of AI tools becomes more routine, organisations may need to update their legal contracts and agreements. IP legislation for AI is a rapidly evolving landscape, so companies need to be aware of changes to the law. There are IP considerations throughout the process of using AI, from the inputs used to prompt AI (which may contain existing IP), to the resulting outputs (which could infringe upon existing IP or constitute new IP). Protection of AI tools themselves using IP is also a current topic, with an increasing number of patent applications for AI-related inventions.
Lavizzari made the following recommendations for individuals and companies when using AI tools:
- understand and document all processes involving AI – eg, the tools and prompts used
- be aware of any restrictions from clients when using AI – eg, when AI tools cannot be used
- be aware of accuracy limitations of AI tools – they can often hallucinate
- know and follow the applicable laws – eg, the recent European Union (EU) AI Act
- follow guidelines and ethical best practices
- only agree to what you can do using AI, and try to state in vendor agreements what you will/will not do using AI tools
- document the responsibility falling upon the client – seek indemnity, warranty, and ‘hold harmless’ clauses
- think ‘insurance’.
Copyright and artificial intelligence: an overview of how they intersect
KEY TAKEAWAY
- LLMs are trained using vast amounts of copyrighted materials, and these materials are copied, stored, and recreated by the LLMs; collective licensing allows for the efficient utilisation of copyrighted materials by AI systems.
In his presentation, Victoriano Colodrón (Copyright Clearance Center) provided an overview of the basic principles of copyright and how copyright and generative AI intersect. Both economic and moral rights are implicated in the training of AI technologies. LLMs, for example, are trained using vast amounts of copyrighted materials, and these materials are copied, stored, and recreated by the LLMs.
Generative AI tools can infringe on copyright in two main ways:
- Ingesting copyrighted materials during training
- Producing output that contains identical or substantially similar material to the protected work
Although many countries have no specific AI-related laws and rely on existing statutes, a major development has been the recent approval of the EU AI Act, which emphasises that AI users need permission to utilise copyrighted work unless exceptions apply. Transparency is a key issue globally, and the EU AI Act will require generative AI providers to make a sufficiently detailed summary of copyrighted works used to train their systems publicly available, with a similar bill pending in the US.
A key question is how AI providers can obtain permission to use protected works in their systems. Rather than direct licensing from individual rightsholders, Colodrón recommended voluntary collective licensing, in which a collective licensing organisation aggregates rights from multiple rightsholders, collects royalties from users, and distributes them to the rightsholders. Collective licensing thus offers content users a faster and more convenient way to gain access to rights.
Colodrón emphasised that outputs from generative AI tools are improved by the use of high-quality, responsibly sourced copyrighted works, which increase accuracy and reduce bias; it is thus in everyone’s best interests that AI is paired with respect for creators and copyright.
Generative AI tools can infringe on copyright by ingesting copyrighted materials and by producing output material that is identical or substantially similar to the original work.
Neurobiological roots of artificial intelligence
KEY TAKEAWAY
- Since the inception of AI in the 1950s, there have been rapid advances in building technologies that aim to mimic human intelligence.
Pawel Boguszewski (Nencki Institute of Experimental Biology) discussed the evolution of intelligence in biology and in artificial systems. Although there is no single clear definition of intelligence, there is general agreement that it is based on the ability to learn and apply information. We now know that rather than being a reactive machine, the human brain is a predictive machine, which allows it to respond to the environment in real time. Neuroscientists are now trying to elucidate the parts of the brain responsible for predicting events.
Just as living intelligence has evolved over time, the concept of AI has progressed from the Turing test in the 1950s, through to today’s LLMs. Many modern AI tools, for example Google DeepMind’s AlphaGo Zero, and AlphaFold, use artificial neural networks that were designed to mimic the living brain. Recent discussions have gone as far as asking whether AI has gained consciousness. Boguszewski remarked that there are two competing schools of thought on the modern definition of consciousness. The first (‘global workspace theories’) defines mental states as being conscious when they are broadcast within a global workspace in which frontoparietal networks play a central hub-like role; using this definition, a machine could be built that is said to have consciousness. The second (‘integrated information theory’) states that consciousness is identical to the cause-effect structure of a physical system that specifies a maximum of irreducible integrated information; using this latter definition, a machine cannot be conscious. Boguszewski concluded his talk by drawing the audience’s attention to the impressive advancements that are taking place in both neuroscience and AI today, which may further improve understanding of our own intelligence.
Many modern AI tools use artificial neural networks that were designed to mimic the living brain.
Ethical considerations in AI-supported medical writing
KEY TAKEAWAY
- AI users should be aware of the ethical challenges and limitations associated with data-driven technologies.
In his talk, Mike Katell (The Alan Turing Institute) discussed ethics in the context of AI. Katell defined AI ethics as the set of tools for guiding responsible choices for the design, development, and deployment of digital technology. The SAFE-D (Sustainability, Accountability, Fairness, Explainability, Data stewardship) principles, for example, serve as a starting point to reflect upon the possible harms and benefits associated with data-driven technologies.
Katell highlighted several key challenges when considering ethics in AI:
- AI is not a single technology, but rather is an evolving concept that comprises multiple different tools
- Contemporary AI was developed originally for marketing purposes rather than for more demanding and strict fields such as medicine
- There are multiple decisions involved in the design, development, and use of generative AI systems that shape the outputs of these systems
- While some generative AI tools are highly supervised, other tools such as ChatGPT and Google’s Gemini are largely automated without human intervention, which makes it difficult to monitor how outputs are generated from a given input
- Generative AI tools are trained to produce plausible outputs rather than facts, and in this way can be thought of as highly complex ‘autocompletes’. ChatGPT, for example, is unable to solve some simple mathematical problems, and though Google’s Med-PaLM can provide accurate information in response to a query, this information is often incomplete
Key questions include who should be accountable if a system causes harm, and who should take responsibility for actions that cannot be explained – the AI company, the user, or the decision maker? Katell emphasised the need for caution around claims of long-term cost savings and enhanced capabilities made for AI tools, and highlighted some of the larger issues of AI at play, such as labour issues, the environmental costs of AI, and the concentration of power in a small number of companies.
Users of AI technologies should be aware of the downsides of such tools from an ethical standpoint, in addition to the benefits that they bring.
Ethical challenges and considerations in implementing AI in healthcare: a Research Ethics Committee perspective
KEY TAKEAWAY
- There are additional ethical challenges that need to be addressed in clinical research studies that use AI technologies, including those surrounding data sharing, data bias, autonomy, and transparency.
Alison Rapley (Freelance Medical Writing Consultant) gave an overview of the ethical concerns that need to be considered in clinical research studies that utilise AI, such as studies that involve patient monitoring, or prediction or diagnosis of illness through digital health applications and platforms. Rapley identified the following potential issues:
- Sharing of patient data: considerations include what level of data is necessary for the AI model being used or built, and how such data are stored and transmitted; most importantly, how data will be shared must be made clear to study participants in order to retain patient trust
- Fairness, inclusiveness, and equity: data and AI algorithms should not be biased – many AI models are trained using biobank data, which are inherently biased towards particular patient groups
- Autonomy: human autonomy should supersede machine autonomy, and AI technologies should be used as tools rather than relied upon without human intervention
- Transparency, explainability, and intelligibility: the purpose and use of AI needs to be made clear to the study organisations and participants; AI technologies should be explainable to different audiences, eg, patients, developers, and regulators
- Risk/benefit ratio: safeguards should be put in place, especially when it comes to sensitive patient data, and just because you can use AI, it doesn’t mean you should
Just because you can use AI, it doesn’t mean you should.
Artificial intelligence: pharma view
KEY TAKEAWAY
- The benefits and risks of AI should be balanced to put patients first; it is more important than ever that patients have access to trustworthy information and data.
Uma Swaminathan (GSK) and Art Gertel (MedSciCom) co-presented a talk on AI from the perspectives of pharmaceutical companies and the general public. Swaminathan highlighted that patients, ethics, and trust should be at the centre of the pharmaceutical industry. AI can bring important benefits for patients, such as accelerated innovation and greater efficiency, and therefore faster approval of new treatments. However, these benefits need to be balanced with the risks, which include questions of accountability and explainability, and data privacy and data/algorithm bias.
AI should be human-centric, with human accountability. Company policies should be updated to ensure that they are fit for purpose for AI, and there should be proactive risk management and robust governance in place. Decisions should be made collectively and collaboratively, rather than by individuals, to ensure ethical practice.
Gertel emphasised the importance of patient trust in the context of AI. Healthcare decisions are no longer being made solely by the physician: many patients are now taking on the role of partners in their care decisions, by consulting technologies such as Google, and now AI, for information. It is more important than ever that patients have access to trustworthy material supporting the principles that healthcare is safe, effective, patient-centred, timely, efficient, and equitable.
It is more important than ever that patients have access to trustworthy material supporting the principles that healthcare is safe, effective, patient-centred, timely, efficient, and equitable.
Patient perspective on generative AI
KEY TAKEAWAY
- There are opportunities for generative AI tools to assist with each stage of the patient journey.
The talk by Mitchell Silva (Esperity & Patient Centrics) focussed on AI from the patient’s point of view. Silva noted that there are opportunities for generative AI tools to assist with patients’ needs at all stages of their journey, from earlier detection of symptoms and accelerated diagnosis, to better disease understanding and optimised treatment decisions. For example, patients can upload their medical files to ChatGPT to obtain lay information, and deepfake avatars can be used by time-poor physicians to educate patients and answer their questions in the patient’s language.
Silva urged caution regarding some of the potential negative effects of generative AI tools, namely data privacy, accuracy and reliability, and regulatory compliance (for example, with the General Data Protection Regulation).
Generative AI can assist patients with better understanding of their disease.
Why not read our summary of the afternoon session of the symposium?
——————————————————–
Written as part of a Media Partnership between EMWA and The Publication Plan, by Aspire Scientific, a proudly independent medical writing and communications agency that believes in putting people first.
——————————————————–
