Systematic review – The Publication Plan
A central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing.
https://thepublicationplan.com

AI in SLRs: a tool, not a replacement
https://thepublicationplan.com/2025/01/29/ai-in-slrs-a-tool-not-a-replacement/
Wed, 29 Jan 2025

KEY TAKEAWAYS

  • AI can enhance efficiency at every stage of SLR development, facilitating projects of scale that may previously have been unfeasible.
  • Use of AI in SLRs requires human oversight to ensure quality, transparency, reproducibility, and accuracy, with authors remaining accountable for their work.

As the demand for up-to-date systematic literature reviews (SLRs) grows, artificial intelligence (AI) is an increasingly appealing tool given its efficiency and ability to manage a vast evidence base. In their article for the International Society for Medical Publication Professionals (ISMPP), Polly Field, Thomas Rees, and Richard White highlight the benefits of AI in SLRs and key considerations for its use.

Benefits and pitfalls of AI

AI tools can streamline SLRs by analysing large datasets, summarising and grouping data, identifying patterns, and visualising findings – all in a fraction of the time it would take a team of researchers. However, careful attention must be given to how AI tools handle sensitive input data, including confidential content, copyrighted material, and personal information. Human validation remains essential to address potential inaccuracies, ‘hallucinations’, omissions, and bias produced by AI.

When and how should AI be used?

Whether and how to use AI in SLRs depends on the context. AI can help to:

  • frame research questions
  • optimise search strategies
  • screen studies
  • extract data
  • assess the quality of evidence, and
  • synthesise findings.

Different AI tools suit different stages, but the authors stress that all use of AI must adhere to strict principles of transparency, reproducibility, quality, and accuracy.
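To make the human-oversight principle concrete, here is a minimal, purely illustrative sketch of an AI-assisted title/abstract screening step. A simple keyword scorer stands in for an AI classifier (the keyword list, record shape, and threshold are all assumptions, not a real tool's API); anything the scorer is unsure about is routed to a human reviewer rather than excluded automatically.

```python
# Illustrative sketch: a keyword scorer standing in for an AI-assisted
# title/abstract screening step. Records with enough keyword hits are
# queued for inclusion; everything else goes to a human reviewer,
# reflecting the oversight principle described above.

KEYWORDS = {"randomised", "randomized", "placebo", "double-blind"}

def screen(record: dict, threshold: int = 2) -> str:
    """Return 'include' or 'human review' based on keyword hits."""
    text = (record["title"] + " " + record.get("abstract", "")).lower()
    hits = sum(1 for kw in KEYWORDS if kw in text)
    return "include" if hits >= threshold else "human review"

records = [
    {"title": "A randomised, double-blind, placebo-controlled trial"},
    {"title": "A narrative commentary on treatment trends"},
]
print([screen(r) for r in records])  # borderline records stay with a human
```

The design choice worth noting is that no record is ever auto-excluded: the tool only accelerates the "include" side, and a human still adjudicates the rest.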

Medical publication professionals should familiarise themselves with existing guidance from the International Committee of Medical Journal Editors (ICMJE) and individual journal policies, as well as the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines on disclosure of AI use. These policies set out the following principles:

  • All authors remain fully accountable for the quality and accuracy of their work, including when AI is involved.
  • Transparency is critical – both the methods and acknowledgment sections must clearly document how and where AI was applied.

The authors emphasise that human oversight is essential, ensuring AI supports rather than replaces expert judgement.

“Human oversight is essential, ensuring AI supports rather than replaces expert judgement.”

As AI becomes more deeply embedded in SLR development, the authors encourage medical publication professionals to explore the potential use of AI in their research, while adopting key principles to ensure robust, transparent, and high-quality reviews.

————————————————–

Have you used generative AI tools in your work?

Overcoming bias in ‘overviews of reviews’: a spotlight on appraisal tools
https://thepublicationplan.com/2024/07/30/overcoming-bias-in-overviews-of-reviews-a-spotlight-on-appraisal-tools/
Tue, 30 Jul 2024

KEY TAKEAWAYS

  • ‘Overviews of systematic reviews’ are a feature of evidence-based decision making, but are only as strong as the individual reviews they include. Evaluating potential biases and the methodological quality of systematic reviews is therefore crucial.
  • A recent article examines two recommended systematic review assessment tools, AMSTAR-2 and ROBIS. While both have value, their use requires proper training, time, and know-how.

Synthesising evidence from multiple systematic reviews (also known as conducting an umbrella review or ‘overview of reviews’) can form a key part of evidence-based decision making and treatment guidelines. However, conducting effective ‘overviews of reviews’ requires careful planning to minimise bias, which can be present at either a primary study or individual review level. In a recent BMJ Medicine methods primer, Carole Lunny and colleagues address the challenges of assessing and reporting bias in systematic reviews. The group offer a detailed examination of AMSTAR-2 and ROBIS, two recommended appraisal tools, and provide practical guidance for authors of ‘overviews of reviews’.

AMSTAR-2 versus ROBIS

The group compared key features of each tool.

AMSTAR-2:

  • 16-item checklist
  • focuses on the methodological quality of systematic reviews of healthcare interventions, including risk of bias
  • reportedly favoured for its quick and easy-to-use format
  • may be preferred for broad assessment of systematic review quality.

ROBIS:

  • domain-based tool
  • 19 items, aimed at identifying biases in systematic reviews
  • useful for pinpointing concerns in review conduct and assessing relevance
  • requires “more thoughtful assessment and time”
  • may be preferred for more nuanced assessments, or comparisons of risk of bias across multiple types of systematic reviews.

Standardising ‘overviews of reviews’

The authors call for a standardised approach to ‘overviews of reviews’ to enhance their credibility and value.

Regardless of the appraisal tool used, the authors call for a standardised approach to ‘overviews of reviews’ to enhance their credibility and value. They outline several key recommendations:

  • Report methodological quality or bias by item, domain, and overall judgement, focusing on outcomes.
  • Discuss risk of bias for each outcome.
  • Highlight any individual review methodological quality issues or potential biases as limitations of the ‘overview of reviews’.
  • Use ROBIS to subgroup reviews by risk of bias, identifying overemphasised findings and excluding high-risk reviews.
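The final recommendation above — subgrouping reviews by ROBIS risk before synthesis — can be sketched as a few lines of code. The data shapes and judgement labels here are assumptions for illustration, not output from the ROBIS tool itself.

```python
# Hypothetical sketch of the subgrouping step: reviews carrying an
# overall ROBIS judgement are grouped by risk level, and high-risk
# reviews are set aside before evidence synthesis.
from collections import defaultdict

reviews = [  # assumed data, not real ROBIS output
    {"id": "Rev-A", "robis": "low"},
    {"id": "Rev-B", "robis": "high"},
    {"id": "Rev-C", "robis": "unclear"},
]

groups = defaultdict(list)
for review in reviews:
    groups[review["robis"]].append(review["id"])

synthesis_set = groups["low"] + groups["unclear"]  # high-risk excluded
print(synthesis_set)  # the reviews carried forward into synthesis
```

Grouping rather than silently dropping records keeps the excluded high-risk reviews visible, so they can still be reported as a limitation of the overview.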

An expanding toolkit

Previously, the launch of PRISMA-S provided much-needed guidance on reporting literature searches within systematic reviews, and Cochrane’s Hilda Bastian proposed solutions to ensure that systematic review protocols were robust. Now, Lunny and colleagues’ primer, and the tools therein, sit alongside initiatives from the LATITUDES Network to form part of a drive to reduce bias in evidence synthesis.

————————————————–

Do you use a specific tool(s) when synthesising evidence from systematic reviews?

Are ‘living’ literature reviews the future of guideline development?
https://thepublicationplan.com/2022/07/06/are-living-literature-reviews-the-future-of-guideline-development/
Wed, 06 Jul 2022

KEY TAKEAWAYS

  • ‘Living’ evidence synthesis requires researchers to find, appraise, and incorporate research into guidelines in frequent cycles.
  • This approach is vital in fast-paced areas of medical research, such as COVID-19, to ensure guidelines remain a trusted source of the best evidence.

The COVID-19 pandemic demanded new ways of working in many aspects of healthcare; one such area was the development of guidelines relating to COVID-19 that could keep up with a continual influx of new data. A group of researchers tackled this challenge by undertaking a ‘living’ review to guide weekly updates to Australia’s national COVID-19 guidelines – they present their experience in an article in Nature.

The traditional model of guideline development is underpinned by systematic literature reviews. These are often developed from scratch and become out of date shortly after publication because of the inability to incorporate relevant new evidence as it becomes available. By contrast, the ‘living’ evidence approach requires researchers to continually monitor the literature and develop regular evidence summaries to keep abreast of newly released data. Guidelines based on ‘living’ reviews are more likely to stay up to date and relevant, which is crucial if they are to remain a trusted source of the best available evidence for clinicians.

‘Living’ evidence synthesis is often facilitated by artificial intelligence mechanisms, such as natural-language processing, and machine-readable FAIR (findable, accessible, interoperable, and reusable digital assets) research data and publication metadata. Crowdsourcing and collaboration can alleviate the human time burden, reducing duplication and redundancy. The authors highlight MAGICapp, a platform facilitating the development of ‘living’ guidelines, and point to initiatives such as COVID-END (COVID-19 Evidence Network to support Decision-making) as an example of how collaborative working can achieve rapid, efficient, and full coverage of evidence synthesis.
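The continual-monitoring loop at the heart of a ‘living’ review can be reduced to a simple idea: each cycle only appraises records published since the last search date. The sketch below illustrates that idea with assumed publication metadata; it is not how MAGICapp or any named platform actually works.

```python
# Minimal sketch of one 'living' update cycle (data shapes assumed):
# each cycle pulls only records published since the last search date,
# so the evidence summary is refreshed without redoing the full review.
from datetime import date

last_search = date(2021, 6, 1)

new_records = [  # hypothetical publication metadata
    {"id": "trial-104", "published": date(2021, 6, 14)},
    {"id": "trial-099", "published": date(2021, 5, 20)},
]

to_appraise = [r["id"] for r in new_records if r["published"] > last_search]
print(to_appraise)  # only the record newer than the last search cycle
```

After each cycle, `last_search` would be advanced to the current date, which is what keeps the review ‘living’ rather than a one-off snapshot.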

“Without trustworthy and up-to-date summaries, the world risks making ill-informed decisions and wasting investment.”

Crucially, multiple updates to the same guideline document will require a more flexible approach to publication and the ‘version of record’. The authors recommend allowing minor updates as addendums to the original publication, with major updates captured in a new article version (along with associated digital object identifier and bibliographic database listing) that is robustly linked to previous versions.

A dynamic approach to evidence synthesis is increasingly accepted by well-known healthcare guiding bodies, including the National Institute for Health and Care Excellence, World Health Organization, and Cochrane, as well as publishers such as The BMJ and Annals of Internal Medicine. However, the authors caution that ‘living’ evidence will only speed up the incorporation of science into policy and practice if it is effectively implemented, and its application beyond the COVID-19 pandemic will require testing across diverse domains.

—————————————————–

What do you think – will ‘living’ reviews replace traditional systematic literature reviews?

Is health research no longer to be trusted?
https://thepublicationplan.com/2022/01/28/is-health-research-no-longer-to-be-trusted/
Fri, 28 Jan 2022

KEY TAKEAWAYS

  • Approximately 20% of clinical trials are thought to be false.
  • Cochrane has published guidance on how to manage potentially problematic studies in its systematic literature reviews.
  • Dr Richard Smith suggests it’s time to assume all trials are untrustworthy unless proven otherwise.

With increasing evidence that scientific fraud is widespread, Cochrane has published a policy for managing untrustworthy clinical trials in the context of systematic literature reviews. However, Dr Richard Smith, cofounder of the Committee on Publication Ethics (COPE) and member of the board of the UK Research Integrity Office, suggests that it is time to go a step further and assume that all research is fraudulent until proven otherwise.

In a BMJ opinion piece, Smith outlines evidence from research leaders who, in their own investigations, found that many studies underlying systematic reviews were fatally flawed or contained false data. Professor Ben Mol, leader of the Evidence-Based Women’s Health Care Research Group at Monash University, estimates that 20% of trials are false. Availability of individual patient data increases the likelihood of detecting fraud, with one study showing that up to 44% of examined trials were untrustworthy.

Cochrane’s policy provides guidance for dealing with these ‘potentially problematic’ trials, including:

  • retracted studies
  • studies with a published Expression of Concern
  • studies where there are serious questions about trustworthiness of data or findings but no formal post-publication amendment.
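The three categories above lend themselves to a simple triage step before meta-analysis. The sketch below is an illustration of that idea only — the field names are assumptions, and Cochrane's policy is a set of editorial procedures, not code.

```python
# Illustrative sketch (field names are assumptions) of triaging studies
# into the categories described in Cochrane's policy before synthesis.
def triage(study: dict) -> str:
    if study.get("retracted"):
        return "excluded: retracted"
    if study.get("expression_of_concern"):
        return "flagged: expression of concern"
    if study.get("trust_concerns"):
        return "flagged: assess trustworthiness"
    return "eligible"

studies = [
    {"id": "S1", "retracted": True},
    {"id": "S2", "expression_of_concern": True},
    {"id": "S3", "trust_concerns": True},
    {"id": "S4"},
]
print([triage(s) for s in studies])
```

Note that only retraction leads to outright exclusion here; the other two categories are flagged for the human judgement the editorial warns is still unavoidable.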

However, as noted in an editorial accompanying the policy, the scope of problematic studies is wide-ranging and there is no validated method to identify them (although tools such as the REAPPRAISED checklist can be useful). As more evidence becomes available and consensus emerges in this area, the guidance will need to be updated.

The scope of problematic studies is wide-ranging and there is no validated method to identify them.

With the risk of medical research fraud ultimately leading to patients being given inappropriate treatment, Smith concludes that it may be time to move away from trusting that research is honest and reliable, and instead assume it is untrustworthy until there is evidence to the contrary.

—————————————————–

What do you think – should medical research be assumed untrustworthy until proven otherwise?

PRISMA-S: guidance for reporting literature search methods for systematic reviews published
https://thepublicationplan.com/2021/05/26/prisma-s-guidance-for-reporting-literature-search-methods-for-systematic-reviews-published/
Wed, 26 May 2021

Systematic reviews and meta-analyses play a crucial role in evidence-based medicine, combining data from multiple studies on a topic to arrive at more robust conclusions than if individual studies are considered in isolation. However, poorly conducted or poorly reported literature searches can introduce bias into the findings and undermine the validity of systematic reviews. The lack of consensus guidelines on the transparent reporting of literature searches compounds this problem and has led to the development and recent publication of the Preferred Reporting Items for Systematic reviews and Meta-Analyses literature search (PRISMA-S) extension to the PRISMA Statement.

PRISMA-S was published in Systematic Reviews by Melissa Rethlefsen and colleagues. Designed to complement the PRISMA Statement and its existing extensions, the checklist of 16 items provides consensus-based guidance on reporting the literature search components of systematic reviews under the following headings:

  • information sources and methods
  • search strategies
  • peer review
  • managing records.

The checklist is designed for use in all fields of research and to cover the whole range of literature review types including scoping reviews, mixed methods reviews and metanarrative reviews. Importantly, PRISMA-S also provides guidance on reporting searches of sources other than literature databases, such as web search engines and study registries, for which there is little existing guidance.
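A reviewer or journal could use the four headings above as a quick completeness check on a draft methods section. The sketch below illustrates that idea with an assumed report structure; it is not the official PRISMA-S checklist format or the full 16 items.

```python
# Hedged sketch: checking that a draft methods section touches each of
# the four PRISMA-S headings listed above. The report structure is an
# assumption for illustration, not the official checklist format.
HEADINGS = [
    "information sources and methods",
    "search strategies",
    "peer review",
    "managing records",
]

def missing_sections(report: dict) -> list:
    """Return the headings a draft report has not yet covered."""
    return [h for h in HEADINGS if not report.get(h)]

draft = {
    "information sources and methods": "MEDLINE and Embase, searched May 2021",
    "search strategies": "Full Boolean strings in an appendix",
}
print(missing_sections(draft))  # the headings still to be written
```

A check like this surfaces gaps early, before peer review, which is where the authors hope the checklist will also be applied.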

“The authors hope that PRISMA-S will be adopted by researchers – and by journals as part of the peer review process – to promote greater transparency and reproducibility of systematic literature reviews.”

With the checklist available and a webinar planned to discuss how best to implement PRISMA-S, the authors hope that PRISMA-S will be adopted by researchers – and by journals as part of the peer review process – to promote greater transparency and reproducibility of systematic literature reviews.


——————————————————–

Do you think PRISMA-S will help with the reporting of literature searches in systematic reviews?
